Why Effective IT Leadership Must Have Inclusive Technology Systems

This is a good, informative read; something all of us, in my humble opinion, should emulate:

Original source:

Why Effective IT Leadership Must Have Inclusive Technology Systems

One-to-one programs are changing the learning experience for every student, including those with disabilities.

by Christine Fox and Samantha Reid

Christine Fox is a project director at the Center on Inclusive Technology & Education Systems at CAST. She is a former classroom teacher and reading coach with over 17 years of experience in educational technology.

Samantha Reid is an ISTE-certified educational technology coordinator for Jenks Public Schools in Oklahoma. She holds a master’s degree in curriculum and instruction with an emphasis on technology integration in education.

Cerebral palsy confined Mercy, a third grader at Jenks (Okla.) Public Schools, to a wheelchair and prevented her from speaking or fully participating in learning activities. That was until a team of teachers, therapists and technology staff worked to find a solution.

They attached a tablet loaded with text-to-talk software to her chair, and those educational technology tools changed her life. She is now an active student who uses the big toe on her right foot to type, while the tablet reads out loud.

Mercy is among the 7.2 million students in K-12 who received special education services under the Individuals with Disabilities Education Act in the 2020-2021 school year.

Accessible Ed Tech Should Be Provided to All Students

Effective IT leaders must promote a balanced and inclusive technology ecosystem that brings together assistive technology (AT), educational technology and IT to support students like Mercy.

Where is your school on its inclusive technology journey? Inclusive educational systems are not simply nice to have. They are a civil right for students with disabilities, according to Section 504 of the Rehabilitation Act of 1973.

Students with or without individualized education programs (IEPs) or 504 plans can leverage accessibility features such as text-to-speech, speech recognition and closed captioning to create inclusive and personalized learning experiences. Such features should be provided to all students.

How do your teams ensure that all students, including those with disabilities, have seamless access to instructional materials, educational tools and resources for learning?

Ask These Questions Before Getting Started on Accessible Ed Tech

We have seen schools and districts proactively include the needs of students with disabilities in their technology and curriculum planning, and we have also seen schools that are just getting started with a technology planning team.

If you are just beginning to examine the equity of your district’s technology infrastructure and practices, your team should ask the following questions:

• Do we have a technology planning team? 

• Is there a leader from AT on the technology planning team? 

• Have AT users tested the accessibility features offered to students or under consideration? 

• Does our IT or ed tech team regularly meet with special education teams? 

Oklahoma School District Gets Help Creating Inclusive Learning

CAST, the nonprofit educational research and development organization that created the Universal Design for Learning framework, launched the Center on Inclusive Technology & Education Systems (CITES) in 2018.

The center promotes a framework of evidence-based practices aligned to the 2017 National Education Technology Plan. The framework is designed to empower school districts to create and sustain inclusive technology systems that serve all students, including students who require AT or accessible educational materials. 

Funded by the U.S. Department of Education’s Office of Special Education Programs, this work focuses on identifying how the technologies are acquired and implemented and how students are supported.

In 2020, Jenks Public Schools was one of five school districts to partner with CITES to remove barriers for students like Mercy and end the stigma of being “different” for using customized learning tools.

Using the CITES framework, the team created a district technology plan that included input from all stakeholders. It set goals to update procurement and curriculum adoption practices and, most important, change the culture regarding accessibility.

The district also has a vendor survey for all curriculum adoption, requiring vendors to share accessibility features. Products are eliminated from consideration if they do not meet specific accessibility ratings.

To support the shift in mindset, the ed tech team posts weekly tips to increase understanding of small changes teachers can make to improve access for all students. Jenks is committed to this journey and continues to drive change to meet all students’ needs and even exceed expectations.

Implementing the CITES self-assessments helped the leadership team identify areas with accessibility gaps and change those practices. “We are proud of our participation in the CITES framework development process and the enhanced focus on strategies to support all students,” says Jenks Superintendent Stacey Butterfield.

All of our students deserve appropriate time, energy and financial support to reduce barriers and increase opportunities for their achievement. Please take the time to review your district’s inclusive technology policies and practices, and visit cites.cast.org for more resources. 

https://www.helenkeller.org/7-common-accessibility-errors-on-websites-and-how-to-fix-them/

Learn about the most common accessibility errors on websites and how to improve them to build better website pages or call out issues on existing websites.

By Aliana Manteria, Megan Dausch (Accessibility Specialist), Matthew Salaverry, and Tara Brown-Ogilvie (Accessibility Specialist) | May 18, 2023
Accessibility affects everyone: individuals who are DeafBlind, people with combined vision and hearing loss, people with varying ranges of vision and hearing loss, those with other disabilities, and people with a full range of sight and hearing. But oftentimes, websites implement (or fail to implement) elements on their pages in ways that make them inaccessible to some audiences. This makes it difficult for people to navigate a website smoothly and can even result in a lawsuit.
The Global Accessibility Awareness Day home page shares that, according to a 2020 WebAIM analysis of one million web pages, 98.1% of home pages had at least one WCAG 2.0 failure, with an average of 60.9 errors per home page. WCAG stands for “Web Content Accessibility Guidelines.” The most common accessibility failures are low contrast text, missing image alt text, empty links, missing form input labels, empty buttons, and missing document language.
You have likely browsed websites, or may even host one, that have a number of accessibility errors. Whether you’re a developer, designer, or a regular Internet user, you have the power to make digital spaces more accessible.
Here, we break down 7 common digital accessibility errors and explain how to fix them to make websites more accessible.
Low Contrast Text
Missing Image Alternative Text
Empty Links / Missing Link Text
Unclear Link Language
Empty Buttons
Missing Form Input Labels
Missing Document Language
1. Low Contrast Text
Low contrast on websites shows up when there is not adequate differentiation between colors on a page. For example, light-colored text on a light background has poor contrast and is likely difficult or even impossible to read, regardless of your level of vision. Poor color contrast can affect anyone, whether someone has vision loss, is blind, has color blindness, or has near-perfect vision.
One way to improve low contrast on your website is by increasing the contrast between text and background so content on your website is more legible. According to the Web Content Accessibility Guidelines version 2.2, you must have a contrast ratio of at least 4.5:1 for normal-size text to be AA compliant. You can use a contrast checker tool to compare the colors you plan to use and confirm they meet accessibility standards and reach the required contrast ratio.
You can also improve color contrast by testing webpages with people across the visual spectrum to get their direct feedback on color combination choices and to gauge the level of comfort people feel while reading information on a website.
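As a rough illustration of what the 4.5:1 threshold means in practice (the color values below are illustrative examples, not recommendations; always confirm exact ratios with a contrast checker), a gray like #767676 on a white background sits right around the AA minimum for normal-size text, while a lighter gray like #999999 falls well short of it:

<!-- Illustrative fragment only: verify exact ratios with a contrast checker -->
<style>
  /* Roughly 4.5:1 against white, so it passes WCAG AA for normal-size text */
  .passes-aa { color: #767676; background-color: #ffffff; }

  /* Roughly 2.8:1 against white, so it fails WCAG AA for normal-size text */
  .fails-aa { color: #999999; background-color: #ffffff; }
</style>

<p class="passes-aa">This text should be reasonably legible for most readers.</p>
<p class="fails-aa">This lighter text does not meet the AA contrast requirement.</p>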
2. Missing Image Alternative Text
Alternative text (or “alt text” for short) is text that describes non-text content, such as images. People who are DeafBlind, blind or have low vision may use assistive technology to access digital content. Examples of assistive technology include screen readers (which read content out loud), screen magnifiers (which enlarge text), and braille displays.
When alt text is missing from an image, screen reader users will not receive the same information as sighted users and may not have any way of knowing what’s in the image. They may try to rely on the image URL or the image’s file name, which may read as something unspecific like “pic.jpg.” This does not provide meaningful information, which makes the image inaccessible. Missing alt text also affects people who can view their screens: if a weak Internet connection keeps an image from loading, the browser displays the alt text in its place. Another reason to make sure alt text is present!
One way to fix missing alt text on a website is by writing descriptive, relevant, and concise descriptions of the images on the site and adding that alt text to the code. It is best to describe the main idea of what’s happening in an image within the context of the web page, and to avoid details that don’t pertain to that main idea.
Let’s say there is an image of people on a website that’s advertising a relay event. Depending on the context, it may be best to write alt text like this: “a group of smiling people standing outside and wearing relay t-shirts.” Don’t add irrelevant details, such as describing people’s shoelaces or the brands of watches on their arms, as those are beside the point of the relay photo being on the page. Make sure the details in the alt text are true to what’s happening in the image and not deceptive, too. Learn more about writing alt text here.
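As a brief sketch of how that alt text might look in the page’s markup (the file names here are hypothetical):

<!-- Hypothetical markup for the relay-event image described above -->
<img src="relay-event.jpg" alt="A group of smiling people standing outside and wearing relay t-shirts">

<!-- A purely decorative image can use an empty alt attribute so screen readers skip it -->
<img src="decorative-divider.png" alt="">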
By understanding the foundations of alt text and implementing it in photos on your website, you will give users who use assistive technology the opportunity to access images on your website and provide blind people and individuals with low vision with a well-informed user experience. 
If you want to find out whether other websites lack alt text in their photos, so you can bring it to the website administrators’ attention, you can right-click an image and select “Inspect” (or press “Control-Shift-I” on the keyboard) to see whether alt text is in the code.
3. Empty Links / Missing Link Text
Links take you somewhere: to a new page or to a different section within the same page.
Empty links are links that do not have text anchored to them. This is neglected on many websites, for example on social media icons that link out to social media sites. If the icon does not have text assigned to it describing where it leads, those who use assistive technology won’t know what the link is for.
One way to fix missing link text, according to this Empty Link article by Equalize Digital, is by adding an ARIA attribute such as aria-label to the <a> tag in the website’s code, so people using assistive technology have access to the name of the link or icon.
It’s also important for developers to think about the purpose of links and use the correct HTML to achieve that. This will make for a better user experience for those who access the web via keyboard and assistive technology.
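As a rough sketch of the aria-label fix described above, assuming an icon-only social media link (the URL and icon markup are placeholders):

<!-- Before: the link has no text, so a screen reader has nothing meaningful to announce -->
<a href="https://www.example.com/our-profile">
  <svg aria-hidden="true" width="24" height="24"><!-- icon shape omitted --></svg>
</a>

<!-- After: aria-label gives the link an accessible name -->
<a href="https://www.example.com/our-profile" aria-label="Visit our social media profile">
  <svg aria-hidden="true" width="24" height="24"><!-- icon shape omitted --></svg>
</a>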
4. Unclear Link Language
Links need to have phrases attached to them that convey where the link will lead. People who use screen readers can tab through the links on a page and skim through the names of several links at a time. The issue begins when a screen reader reads a link list and it says things like “click here,” “read more,” and “start now.” This link language is unclear when read out of context.
One way of improving link language is by using specific and concise language that identifies the subject matter of the link, such as naming links with phrases like “read the salad recipe,” “read more about dressings,” and “start the nutrition course,” as opposed to unclear phrases like “read more.” Doing this helps everyone (those who use screen readers, those who use braille displays, and those who can see and are skimming web pages quickly) better understand where links will lead, and it makes for a more accessible user experience.
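In markup, the difference might look something like this (the URLs are placeholders):

<!-- Unclear out of context: a screen reader’s link list just announces “Read more” -->
<a href="/recipes/salad">Read more</a>

<!-- Clearer: each link’s text identifies its destination on its own -->
<a href="/recipes/salad">Read the salad recipe</a>
<a href="/articles/dressings">Read more about dressings</a>
<a href="/courses/nutrition">Start the nutrition course</a>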
5. Empty Buttons
A button on a website performs an action on a web page you are currently on. According to this a11y-101 article, buttons are used to submit a form, open a layer or menu, close a layer, or close a pop-up. A pop-up layers over the existing web page. For example, when you enter a clothing store website, a pop-up will usually appear on the page a few seconds later that says something like “Get 10% off on your next purchase – just input your email address!” Usually beneath that copy there will be buttons on the pop-up that say something like “Give me 10% Off” or “No I don’t want to save.” When you select either button, the button will perform the action it’s linked to.
When a button in a form, menu, layer, or pop-up is empty, it creates obstacles for people who use assistive technology: it keeps them from knowing what the button says or does.
One way to fix empty buttons is by working with the underlying HTML elements. According to this Empty Button article by Equalize Digital, “You will need to either: add text content within an empty <button> element, add a value attribute to an <input> that is missing one, or add alternative text to a button image.” This will allow screen reader users to access the information in buttons. Overall, it’s important to make sure that buttons (as well as links) are coded correctly so they don’t complicate the user experience for people who use assistive technology.
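Following that guidance, the three fixes might look roughly like this (the labels and file name are illustrative):

<!-- 1. Add text content inside an otherwise empty <button> element -->
<button type="submit">Give me 10% off</button>

<!-- 2. Add a value attribute to an <input> button that is missing one -->
<input type="submit" value="Subscribe">

<!-- 3. Add alternative text to an image used inside a button -->
<button type="button">
  <img src="close-icon.png" alt="Close this pop-up">
</button>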
6. Missing Form Input Labels
A form is used on a website to collect information from a user, such as an email address or credit card details. Without labels, a person will not understand what the form is asking for, which can cause confusion or hesitation. A person may attempt to fill out the form, but without labels they may not know where to input their name, address, or other important information, and may give up on the form entirely.
One way to improve missing input labels on a website is by making the form field have “a visible label inside a <label> element” according to a Practical eCommerce article. Doing this will help people have a smoother and more accessible user experience while filling out forms.
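A minimal sketch of that approach, using a hypothetical email field:

<!-- Before: no label, so assistive technology cannot say what this field expects -->
<input type="email" name="email">

<!-- After: the for/id pairing associates a visible label with the field -->
<label for="email">Email address</label>
<input type="email" id="email" name="email">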

7. Missing Document Language
The language used on a website needs to be specified so that screen readers can announce the content with the correct pronunciation for that language.
One way to fix missing document language is by adding the HTML lang attribute in the website’s code, such as <html lang="en"> for English or <html lang="es"> for Spanish. This makes information on websites more accessible for screen reader users. Learn more about the HTML lang attribute here.
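The same lang attribute can also be set on an individual element when a page mixes languages; here is a brief sketch of both uses:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Example page</title>
  </head>
  <body>
    <p>Most of this page is announced with English pronunciation.</p>
    <!-- A lang attribute on an element overrides the document language for that content -->
    <p lang="es">Este párrafo se lee con pronunciación en español.</p>
  </body>
</html>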

[Image: An open laptop and a braille display on a desk]
Conclusion
At Helen Keller Services, we strive to make our website as accessible as possible to all audiences. We hope you can use this article as a motivator to take action on your own website, or to contact companies with inaccessible websites and ask them to make changes that will ultimately make digital content more accessible for everyone.
It’s imperative to implement best accessibility practices and to include people with disabilities in every step of a website project to show your commitment to accessibility. As an organization that works with DeafBlind, blind, and low vision individuals, we take pride in creating and fostering both physical and digital accessible environments.
If you’d like to learn more about our mission, visit helenkeller.org. If you’d like to learn more about how we practice digital accessibility, visit our Accessibility Statement page.
If you have questions about this article, contact help@helenkeller.org.

https://webaim.org/resources/contrastchecker/

https://www.w3.org/TR/WCAG22/

https://accessibility.huit.harvard.edu/describe-content-images

https://equalizedigital.com/accessibility-checker/empty-link/

https://a11y-101.com/design/button-vs-link

https://equalizedigital.com/accessibility-checker/empty-button/

6 Accessibility Basics Cause 97% of Errors

https://www.w3schools.com/tags/att_global_lang.asp

How Screen-Reader Users Type on and Control Mobile Devices

https://www.nngroup.com/articles/screen-reader-type-control/

“I can do everything you do as a sighted person, but it takes me a little bit longer and it takes me an alternative way of finding it and getting to it. […] The biggest thing we try to help blind people understand is [that] they can do it, but it’s a very different way of doing it.”
— Screen-reader user who is completely blind
A major goal of UX designers is to make things easy for users. For the most part, we’ve made a lot of progress. However, we still have a lot to do to make things easy for people who depend on assistive technology. Users who rely on screen readers to use smartphones have learned that almost anything they want to accomplish will have an enormous interaction cost. Unlike many sighted users who quickly give up or move on when things take extra effort, many screen-reader users have accepted that everything inevitably takes extra time and patience.
Learning to use a smartphone is much harder for individuals who are blind or have low vision than for individuals with full vision. There are three major reasons for this extra difficulty:
Technology assumes users can see. Humans rely on sight more than any other sense. Visual information is extremely rich and detailed compared to what people hear, smell, taste, or touch. Thus, it’s no surprise that most technology has been designed around sight. The largest drain on a smartphone battery is the enormous, bright screen — not the speakers or the haptic feedback.
Many visual impairments come later in life when it’s harder to learn new skills. While there are countless people who have been blind since birth, many screen-reader users have lost sight later in life — some suddenly and some gradually. The longer a person lives, the more their brain plasticity decreases, making it more and more difficult to learn new skills.
Users with visual impairments must teach themselves. In most cases, screen-reader users must figure out how it all works on their own. They do not have the benefit of constantly observing how others use technology — as many young, sighted users do. One elderly study participant who is gradually losing his sight said, “you get [a smartphone] and you really have to […] train yourself, and sometimes it takes forever unless you’re in a situation where you can sit down and have training.”
The goal of this article is to shed light on how screen-reader users accomplish some of the most basic actions on smartphones: typing and navigating. While perfect solutions to these challenges do not yet exist, the first step in the design-thinking process is to empathize. We hope to help you deepen your empathy with these insights.
Our Research
To better understand the experience of using mobile screen readers, we recently conducted qualitative usability tests and contextual-inquiry sessions with participants who had varying levels of sight — including some who were fully blind. We visited these people in their homes or personal offices and gave them tasks to perform on their own mobile devices. The following are some insights that emerged from those sessions.
Typing on a Touchscreen Is Hard
To stay oriented in the world, users with visual impairments rely heavily on tactile information gathered through their hands. In fact, one of our study participants shared that he collects 3D models of buildings as he travels in order to feel them and get a sense of their structure. Multiple users in our study mentioned that they miss the days when smartphones had physical buttons for this reason.
This reliance on touch makes a computer the preferred tool for many online tasks because physical keyboards are easier to use than smartphones’ small touchscreen keyboards. Touchscreen keyboards are difficult to use because they lack reference points (other than the edges of the screen) to help users locate the keys they need. In contrast, on physical keyboards users can feel the individual keys as separate buttons which serve as landmarks to keep them oriented. Users are also able to leave their hands in one place and use specific fingers for designated keys. This is why any user (sighted or not) can learn to type very quickly, without looking, on a physical keyboard.
On a touchscreen, the only way for screen-reader users to know whether they have tapped the correct key is by hearing the letter spoken out loud by the screen reader.
 
One of our participants demonstrates what it is like to type on a digital keyboard after years of practice. (The participant graciously gave his consent for us to include his name in this recording.)
However, the affordances of mobile devices are still valuable enough that users and designers are constantly working to overcome the challenge of typing. Our study participants relied on four main methods for inputting information that allowed them to avoid the touchscreen keyboard:
Dictation
Phone-based voice assistants like Siri and Google Assistant
A braille display
A digital braille keyboard
Each method is described below, ordered by user preference.
Dictation
Screen-reader users almost always prefer dictating (speaking) information over typing it. This is because dictation has a significantly lower interaction cost than any of the available keyboard options, and it does not require braille literacy.
Users most often go straight for the dictation button on a touchscreen keyboard, before even attempting to type. (Android, left and iOS, right)
The greatest challenge of dictation is that it still results in many mistakes. Moreover, it is difficult to check the transcript for accuracy and to correct mistakes. Even if a user moves the screen-reader focus back to the drafted text to listen and check for accuracy, it is hard to precisely place the cursor near a mistake and edit the text. Unless there is a lot of text already drafted, users will just delete everything and start over instead of attempting to edit the text.
Additionally, many languages contain homophones (words that sound the same but are spelled differently and have different meanings — for example: right, rite, wright, and write). Simply having the screen reader read back what was typed is not enough for a user to recognize when the wrong word has been transcribed. Unfortunately, homophones can lead to unnoticed typos in texts, emails, or internet searches — even when screen-reader users are double-checking. As a result, screen-reader users may have to deal with inaccurate search results, and others may unfairly judge them as illiterate.
This participant who is fully blind meant to type app into the search field to look for the App Store on his device. He accidentally typed only pp without realizing it and had to read 4 separate search results to figure out that he must have had a typo.
However, users in our study were willing to put up with the challenges of dictation to avoid the difficult process of typing on a touchscreen keyboard, which often took much longer and resulted in as many mistakes. This was especially true when they were writing longer passages such as an email.
Voice Assistants
Dictation is available only when the screen reader’s focus is positioned in an open-text field and the keyboard appears. In other situations, users often turn to a voice assistant like Siri or Google Assistant to complete basic tasks that involve typing so they can avoid opening and navigating through an app or website. For example, screen-reader users are likely to ask a voice assistant to send a text message to a contact (Hey Siri, send a message to [name of contact]) rather than opening the Messages app, finding the relevant conversation, getting the screen reader to focus in the open-text field, and hitting the dictation button.
Screen-reader users prefer to use a voice assistant like Google Assistant (left) or Siri (right) to perform typing tasks such as sending a text or email.
When we asked study participants to send us an email during our sessions, several revealed that they had already created a contact with our information after only one email exchange. As one user stated, “Siri comes in real handy when you’re doing things with text and email, as long as you have [the person] in your contacts. I guess if you don’t, you’ve got a problem. You gotta put it in.” When questioned about having many contacts he responded, “[they’re] easy to delete.”
Braille Display
A braille display is a physical device that acts as both an input and an output channel for another device. Modern braille displays generally connect to computers, smartphones, or tablets via Bluetooth. They allow users to interact with an interface without a keyboard or mouse, by controlling the focus of their screen reader with physical buttons. Users can type in braille using the 8 physical braille keys, which give them more precise control than the touchscreen keyboard or dictation because of the physical reference points. The braille display also acts as an output channel: as the screen reader’s focus moves across the screen, the braille display ‘translates’ the words on the screen into the braille alphabet and presents them to the user through a set of mechanical braille pins.
One participant’s Bluetooth braille display
Controlling the device and typing on the braille display with the 8 buttons
Reading on the braille display with the mechanical pins
Braille displays give screen-reader users a way to silently interact with devices: users can turn off the audible screen-reader output and read what it is telling them with their fingers on the braille pins. Even though these devices are most commonly used with computers, they can also control mobile devices such as smartphones, as multiple participants in our study demonstrated. Users find them particularly helpful when they are in quiet or professional environments, or when they need to do a lot of typing.
 
Silent example of one participant in our study navigating his smartphone by swiping and reading the output of the screen reader on the white braille pins.
However, a braille display, which is about the same size as a small Bluetooth computer keyboard, is inconvenient to carry around. This is another reason why screen-reader users rely so heavily on dictation for typing on a mobile device.
Additionally, because braille displays can easily cost more than a smartphone or computer, not all users who might like to use one will own one. Moreover, many blind or low-vision individuals are not familiar with the braille alphabet. Designers should not assume that users will rely on these devices and should create designs that are easy to use without them.
Digital Braille Keyboard
Users who can type in Braille can also enable the on-screen braille keyboard on their smartphones. This keyboard consists of numbered dots that mimic the buttons on a physical braille display. Users are most likely to utilize this input method when they want to type quickly and be more precise than dictation will allow, but still want to avoid the traditional touchscreen keyboard.
Upright mode. The user holds the phone with two hands, with the screen facing away from their body. They curl their fingers so the tips of three fingers from one hand rest on the dots on one side and three fingers from the other hand rest on the dots on the other side (Android TalkBack).
Upright mode on iOS
Tabletop mode on Android. The user lays the phone in front of them with the screen facing up and, like in the upright mode, places three fingers from one hand on the dots on one side and three fingers from the other hand on the dots on the other side.
Tabletop mode on iOS
 
One study participant using the tabletop mode and upright mode while typing on a digital braille keyboard
Users can rely on their spatial memory while using a physical or digital braille keyboard because there is a designated finger for each button. The user does not need to be very precise because the tap will register if the finger lands close to the button. The screen reader announces which letters or symbols have been typed as the user presses combinations of the on-screen keys; this is the only way for users to know in real time whether they have spelled something correctly. However, this method of typing takes over the entire screen and requires the user to use both hands and completely change how they hold the device, which is a lot of work. Hence, this is yet another reason why dictation is the preferred method for most text input.
Commonly Used Gestures for Controlling the Screen Reader on a Mobile Device
It’s easier to use a screen reader with a computer than with a smartphone — as many of the participants in our study acknowledged. While the mouse is mainly useless when using a screen reader, a physical keyboard gives the user a lot more power than a touchscreen does. Keyboards afford hundreds of custom commands (i.e., accelerators) that enable users to complete direct actions without having to navigate through an interface to find the page element corresponding to that action. In many cases, a keyboard allows screen-reader users to break out of the suffocating sequence of the code to directly access what they want. For example, a user could use the custom keyboard command associated with sending an email rather than having to navigate through the interface to find the Send button.
However, on a touchscreen device users must use gestures to control their screen readers. While screen readers make use of many unique gestures, there are far fewer unique combinations than there are ways to combine keys on a physical keyboard for direct commands. Unfortunately, the many touchscreen gestures can be hard to remember — particularly for new screen-reader users. Additionally, screen-reader gestures do not have any specific signifiers and are not related in any way to the actions they stand for.
Various operating systems provide comprehensive documentation of the gestures available to control screen readers on mobile devices (for example, VoiceOver on iOS and TalkBack on Android), and many allow for some limited customizations. Here we present some insight into which actions users found most useful, and, in some cases, the gestures that trigger these actions. Familiarity with this vocabulary of gestures can help you anticipate some of the expectations screen-reader users bring with them when they open your website or app on their phones.
Swiping Left and Right
Swiping is the most basic way in which users explore designs. Once the screen reader’s focus lands on something that has a strong information scent, the user can double-tap to select it. Swiping perfectly embodies the sequential nature of a screen reader. Any design can become more accessible if the designers ensure that what users come across as they swipe is clear and that the sequence in which elements are read makes sense.
Dragging
Dragging a finger across the screen will cause the screen reader to announce everything along the finger’s path. When users have an idea of where something is on the screen, they sometimes drag their finger in that direction.
Dragging breaks the sequential organization in the code and gives users direct access. Moreover, with dragging, the size and the location of the various page elements along the dragging path matter (as predicted by Fitts’s law), with bigger and closer elements being easier to acquire. (In contrast, when swiping sequentially through the sequence of the code, the size and the location of a page element do not make any difference for a screen-reader user.)
We saw all kinds of users drag rather than swipe:
Novice screen-reader users who were overwhelmed by the task of swiping through many page elements
Partially sighted users who had a vague sense of what was displayed on the screen
Expert screen-reader users who were looking for something specific that they knew was there but couldn’t find by swiping
Tapping Directly
When users know where something is located on the screen (because they have accessed it many times in the past), they might tap on that part of the screen to directly access it. (Note that the tapping action will cause the screen reader to read that element rather than to select it. To select it, users would have to double-tap.) However, simply experiencing the page through a screen reader will not teach the user where something is on the screen. Screen-reader users might learn where something is and start tapping it directly in the following cases:
A sighted person has shown them where to tap.
An item (such as a search bar) comes up first in the sequence and they guess it is located near the top left of the screen.
The user has customized the location of an element (such as apps on the phone homescreen).
The user has learned where something is by dragging or just tapping around.
Continuous Progression
Users often have the screen reader continuously read all page elements so that they don’t need to constantly swipe to move through a page. This is most common on content-heavy pages (such as articles), or when the user wants to explore everything on a new page. When users enable this continuous flow, they can sit back and simply listen for a while.
Continuous progression is a slower exploration method than swiping through every page element because swiping enables the user to skip ahead if something seems unrelated to their task. Most screen readers allow users to begin this continuous progression from the current location of the focus or from the beginning (top left) of a page. Users can stop this continuous progression at any time by tapping with two fingers.
Stopping the Screen Reader
Users can stop the screen reader at any time. This is very important if they suddenly need to silence their phone. During our sessions, users frequently silenced the screen reader to focus on the conversation they were having with another person. This is particularly important for the think-aloud method employed in usability-testing sessions because the participant doesn’t have to compete with the screen reader to share insights.
When users cut the screen reader off mid-sentence it is often because they have shifted their attention to something else. In some cases, they will purposefully pick things up right where they left off. But in other cases, they will have forgotten exactly what they were hearing by the time they come back.
Screen-Reader Controls
Because the number of convenient gestures (everything from swiping with one finger to triple-tapping with four fingers) is limited, screen readers allow users to re-purpose these common gestures, and to adjust other settings (such as speaking speed, or whether a swipe moves between headings or links), on the go by changing the mode the screen reader is in. These modes can be accessed through a menu that is usually available at any time while the screen reader is running. iOS calls this menu the rotor and displays it when the user makes a twisting motion with two fingers anywhere on the screen. Android calls it the reading controls, activated by swiping down and then right in an “L” shape.
Conclusion
Typing is a difficult task for screen-reader users, so they generally prefer to dictate whenever possible. Designers should not assume that screen-reader users will make use of the same touchscreen keyboard that sighted users rely on — that is, in fact, their least favorite input method. Controlling a screen reader on a smartphone is more difficult than on a computer, but users still learn to do it because mobile devices offer so many benefits in their lives.
Reference
Sheffield, R. M., D’Andrea, F. M., Morash, V., & Chatfield, S. (2022). How Many Braille Readers? Policy, Politics, and Perception. Journal of Visual Impairment & Blindness, 116(1), 14-25.

https://www.nngroup.com/videos/make-it-easy-ux-slogan-8/

https://www.nngroup.com/articles/interaction-cost-definition/

https://en.wikipedia.org/wiki/Neuroplasticity

https://www.nngroup.com/articles/millennials-digital-natives/

https://www.nngroup.com/articles/design-thinking/

https://www.nngroup.com/videos/how-practice-empathy/

https://www.nngroup.com/articles/usability-testing-101/

https://www.nngroup.com/articles/contextual-inquiry/

https://www.nngroup.com/articles/iphone-x/

https://www.nngroup.com/articles/mobile-ux/

https://www.nngroup.com/articles/voice-assistant-attitudes/

https://www.nngroup.com/videos/mouse-king/

https://www.nngroup.com/articles/ui-accelerators/

https://www.nngroup.com/articles/direct-vs-sequential-access/

https://support.apple.com/en-my/guide/iphone/iph3e2e2281/ios

https://support.google.com/accessibility/android/answer/6151827?hl=en&ref_topic=10601570

https://www.nngroup.com/videos/jakobs-law-internet-ux/

https://www.nngroup.com/articles/information-scent/

https://www.nngroup.com/articles/fitts-law/

https://www.nngroup.com/articles/customization-of-uis-and-products/

https://www.nngroup.com/articles/modes/

AI & Accessibility

Insights: AI and Accessibility 
Introduction
Artificial intelligence (AI) has the potential to change how we work, socialize, shop, and access critical services like healthcare. Growth in AI is happening exponentially, and it’s faster than most of us can keep up with. It’s exciting and scary at the same time, but especially scary for some. 
The speed of new AI developments comes with the potential to accelerate the inclusion, or exclusion, of people with disabilities. The impact of AI will depend on the people who are generating and choosing data sets, writing and testing algorithms, and building interfaces to leverage AI. 
To better understand how people with disabilities are thinking about AI and their concerns for the future of AI, Fable surveyed our community of assistive technology users. This article outlines what we learned and makes recommendations for creating more inclusive AI tools.  
AI momentum is building
Tech news has been taken over by the latest AI trends and tools. Many of us are paying close attention to new developments in the field, including the Fable community. 91% of respondents to our survey said they are following the recent advances in AI. 
A number of AI advances related to accessibility were announced on May 18th, Global Accessibility Awareness Day: 
Apple: new features for cognitive accessibility, along with Live Speech, Personal Voice, and Point and Speak in Magnifier 
Google: 5 products and features that make the digital world more accessible 
Microsoft: Global Accessibility Awareness Day – Accessibility at the heart of innovation 
The Fable community has been exploring new tools in this space. For example, 54% have used ChatGPT. Other examples of AI adoption include ElevenLabs for text to speech, Seeing AI to identify objects with their smartphone camera, and DALL·E to generate images. 

AI isn’t trusted, yet
There are concerns around the trustworthiness of AI, with only 19% of respondents agreeing that existing AI is trustworthy. The reality is that AI tools have varying degrees of accuracy depending on the task. 
For example, if you ask AI image generators to create a picture of a blind woman with a guide dog, they’ll have difficulty generating a harness for the dog and will likely show the dog on a leash, which isn’t the widely used method of navigating with a guide dog. These gaps in accuracy are caused by gaps in training data sets. There aren’t a lot of photos of blind people to train AI on because photo shoots don’t commonly include disabled models.  
There’s also no way to verify the accuracy of the images generated. AI tools need to be transparent when they fill in data gaps with guesses. 

Hope for benefits in the future
On the plus side, two out of three respondents said that recent advancements in AI have had a positive effect on their life. There’s a great deal of hope around how AI can continue to positively impact accessibility and inclusion in the future. 
“I think if implemented correctly, AI could have impressively positive impacts on inclusivity and accessibility. AI could be the difference between being able to contribute in society and not being able to contribute and being able to access vital information more-readily and seamlessly than ever before.” 
– Christina M., Screen reader user 
“I think AI will have a big impact on accessibility. I think it could make difficult tasks for disabled people much easier.” 
 – Emma L., Alternative navigation user 
“It may level the playing field for people with disabilities in allowing them to participate in activities that would have otherwise been challenging or impossible.”  
 – Charmaine C., Screen reader user 
Assistive tech users need to be considered
Members of the Fable community don’t always feel that their needs are being considered as AI advances. Half of respondents who have engaged with AI technology encountered barriers or challenges when using AI due to their disability. 
“Bard, OpenAI and Bing all have the issue that new content received from them when you ask a question is not automatically read by screen reading technology.”  
– Martin C., Screen reader user 
“Some barriers I experience are AI technology not being fully customizable, or having one or more functions that require mobility, like pushing a button to power it on, lower the volume, etc.” 
 – Remon J., Screen magnification and alternative nav user 
While we don’t have exact numbers on assistive technology usage, the World Health Organization estimates 900 million people need assistive products, which includes assistive technology.1 
Representation of assistive technology users in the development of AI technology and the data sets used to train machine learning models will be critical to ensuring that AI can benefit more people. According to our survey, only 7% of respondents believe that there is adequate representation of people with disabilities in the development of AI tools. 
For example, for everyone to benefit from AI tools that use voice interactions, you need to train them with a diversity of voices. People with ALS, people with Down syndrome, people who are Deaf, and many more diverse voices need to be included. If you don’t involve the end users in the design process, the AI solutions are unlikely to be widely adopted. 

Fable testers are eager to participate in AI 
Despite these challenges, there is an eager community of assistive technology users ready to help. 87% of respondents would be willing to provide feedback to AI developers to help them improve the accessibility and inclusivity of their products.  
Not a single survey respondent felt that AI should continue to be developed without clear regulations and guidelines in place to ensure accessibility and inclusivity for people with disabilities. They believe that it is through their feedback that AI technology can become more inclusive of users of assistive technology. 
Accessibility can also lead to more innovative solutions. Understanding the barriers faced by people with disabilities can spur creativity and better solutions for everyone. Some ideas from the Fable community include:
“I have hearing problems and really struggle with phone conversations. At the same time, I don’t like asking other people to handle phone calls for me because they will never do things the same way I would. I wish I could just tell an AI what I want done, and let it handle the phone calls for me.”
– Michelle B., Alternative navigation user 
“As I have been exploring AI Services like Chat GPT, I have been amazed by its ability to understand computer code in multiple languages. I believe I could have put this to good use in my coding classes, having it assist me in figuring out what was wrong with the programs I was writing.” 
– Cullen G., Screen reader user 
“I type using an on-screen keyboard and it takes me a long time to type. It would be easier if AI was able to predict my next sentence and I could edit it from there.” 
– Emma L., Alternative navigation user 

A more inclusive way forward for AI 
“Technological progress has to be designed to support humanity’s progress and be aligned to human values. Among such values, equity and inclusion are the most central to ensure that AI is beneficial for all.” 
– Francesca Rossi, AI Ethics Global Leader, IBM 
We need to be able to discover cases of “statistical discrimination” caused when AI uses pattern matching or optimizes within a set of choices. People with disabilities and their needs can too easily be filtered out, unintentionally. 
Implementing AI equity and safety protections is top of mind for more than 350 tech executives and researchers in the industry who have signed a statement urging policymakers to recognize the risks of unregulated AI. 
Our Innovation team at Fable is focused on three critical aspects of AI development. 
Inclusive Data
Organizations like Microsoft, Google, the World Economic Forum, and many others have released guidance on creating more equitable AI: 
Microsoft HAX Toolkit 
Google AI Principles 
World Economic Forum Blueprint for Equity and Inclusion in Artificial Intelligence 
In sourcing and preparing data sets, we must consider how gaps in data may change the outcomes for different groups of people. Selection bias and availability bias can have an immense impact with the widespread adoption of AI tools. Consider an AI tool that reviews resumes and uses work history to determine suitability for a job. Without adequate training data on people with disabilities, who tend to be underemployed (U.S. unemployment is twice as high for people with a disability as for people without), the very inequities a company may be trying to address through its hiring can be reinforced by the tool. 
Confidence building through transparency
When creators are transparent about their data sources, models, and overall AI approach, it builds accountability, increases trust, and promotes participation. When individuals have insight into how an AI system functions and makes decisions, they can better understand and predict its behavior. This understanding is essential to alleviate concerns and increase confidence in using AI technology for people with disabilities. Companies developing AI recognize this and are establishing transparency standards. 
Microsoft promotes the use of Transparency Notes to share the intended uses, capabilities, and limitations of their AI platform services.  
Google has published a Data Cards Playbook — a toolkit for transparency in AI data set documentation.  
Evaluation by people with disabilities
When building AI tools, beyond inclusive data and transparent practices, we need a willingness to iterate based on feedback from end users. Given the widespread adoption of assistive technology by people with disabilities, we must intentionally include assistive technology users in evaluating AI user interfaces. 
At this stage of constant ‘new AI product’ announcements, one of the core goals of companies is to collect feedback and improve their products. People with disabilities must be able to sign up, interpret the outputs, and provide feedback or we will miss this opportunity to identify and remediate potential barriers.  
Reach out to Kate Kalcevich, Head of Accessibility Innovation at Fable to learn more about how Fable can support your AI initiatives.  
The potential of inclusive AI
With its ability to analyze vast amounts of data and perform complex tasks, AI has the power to revolutionize accessibility for individuals with disabilities. From speech recognition to image recognition, AI technologies can enhance communication, navigation, and interaction for those with hearing, vision, mobility, and other disabilities. AI tools can automate mundane tasks and provide real-time assistance, enabling greater independence and inclusion for people with disabilities.  
Right now, AI is being used in tools like Seeing AI and Be My Eyes, which help blind people interpret visual information. AI is also improving the accuracy of auto generated transcripts for people who are Deaf or hard of hearing through tools like Live Captions and Otter.ai. The time to focus on inclusion is now. 
As AI continues to advance, it is critical for developers, data scientists, and researchers to ensure that AI solutions are designed inclusively. By harnessing the power of AI along with a focus on accessibility, we can create a future where technology breaks down barriers and empowers everyone to thrive. 
 

https://www.apple.com/newsroom/2023/05/apple-previews-live-speech-personal-voice-and-more-new-accessibility-features/

https://blog.google/outreach-initiatives/accessibility/global-accessibility-awareness-day-google-product-update/

https://blogs.microsoft.com/on-the-issues/2023/05/18/global-accessibility-awareness-day-generative-ai/

https://beta.elevenlabs.io/speech-synthesis

https://www.microsoft.com/en-us/ai/seeing-ai

https://openai.com/product/dall-e-2

https://www.cbc.ca/news/world/artificial-intelligence-extinction-risk-1.6859118

https://ai.google/responsibility/principles/

https://www.weforum.org/whitepapers/a-blueprint-for-equity-and-inclusion-in-artificial-intelligence/

https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1:primaryr6

https://sites.research.google/datacardsplaybook/

https://meetings.hubspot.com/kate341

https://support.microsoft.com/en-us/windows/use-live-captions-to-better-understand-audio-b52da59c-14b8-4031-aeeb-f6a47e6055df

https://otter.ai

The iLet Bionic Pancreas and “Diabetes Without Numbers”

There’s a new closed-loop insulin pump system available in the United States, one that is so simple to operate that its manufacturer is calling it a “bionic pancreas.” It’s the iLet, from Beta Bionics.
Compared to other looping systems, the iLet is radically simplified. There’s no carb counting, no basal rates, no correction factors — it delivers insulin automatically with almost no input at all from the user. It is, its creator Ed Damiano proclaims, “the first device that lets you manage your diabetes without numbers.”
Damiano, the founder of Beta Bionics, is a biomechanical engineer who years ago swore to build a better insulin pump when his infant son was diagnosed with type 1 diabetes. He spoke to Diabetes Daily about his invention.
Set It and Forget It
The iLet system begins by asking for a single input: your body weight. It does nearly everything else by itself, using lifelong machine learning.
The manual states: “You do not need to know your basal insulin rates, correction factors, or carbohydrate-to-insulin ratios to use the iLet.” In fact, if you do know your factors and ratios, it doesn’t matter, because you have no ability to input any of these numbers. There is no manual mode. The iLet takes all of these decisions out of your hands.
“The iLet determines 100 percent of every therapeutic insulin dose,” Damiano says. “That has never happened before. It’s not a hybrid closed loop system. In no world could you call it a ‘hybrid’ system.”
That makes the iLet a potentially ideal option for people who have never before used an insulin pump, or for people who struggle with all the math that diabetes requires. If you’ve never felt comfortable changing your own insulin ratios without the guidance of a doctor or diabetes educator, this might be your pump system, because it does all the math and hides all the numbers.
Damiano states that it only takes about two days for the iLet to learn all of these factors. “It reaches its steady state within about 48 hours, on average.” It does this simply by guessing how much insulin you need and paying very close attention to how your body responds. Very soon, the software more or less knows how much basal insulin you need, how much insulin to use for meals, and how to adjust insulin delivery rates in response to low and high blood sugars.
The algorithm adapts so quickly that Damiano is confident it can handle short-term changes in insulin sensitivity, including those associated with the menstrual cycle or illness.
No Carb Counting
Perhaps the most eye-popping feature of the iLet is a mealtime dosing system that doesn’t use carbohydrate counts.
Damiano calls carb-counting a “fiction.” He thinks that other insulin dosing systems that rely on carbohydrate counts are “participating in a shared fantasy.”
“People aren’t good at carb counting. Humans, as a species, can do amazing things, but they can’t count carbs, and we shouldn’t pretend they can.”
Instead, the iLet asks you to ballpark the carbohydrate content of your meal. For each meal, you can select from three different settings: “usual for me,” “more,” and “less.”
The iLet quickly learns your habits, and keeps breakfast, lunch, and dinner separate. If you usually eat a bowl of oatmeal for breakfast, eat your oatmeal and select a “usual for me” bolus. It should only take a few breakfasts for the iLet to learn how much insulin you need. Whether your regular lunch is a high-carb sandwich or a low-carb salad, the algorithm will learn and adjust.
“It’s all relative to you. Are you having a usual amount of carbs for you for that meal type, more, or less? That’s it.”
You can pre-bolus up to 15 minutes ahead of eating, but the manufacturer recommends that you use the “announce carbs” feature when the food actually arrives. The pump delivers the extra insulin at that instant, giving its best guess for three-quarters of the insulin that you’ll need to keep your blood sugar steady. It will react to highs and lows for the next several hours.
The system is so adaptive that if you forget to bolus, it would be a mistake to deliver a bolus after finishing your food. At that point, the corrections algorithm has already taken control of the situation, and your pump is already administering extra insulin to account for your rising blood sugar.
Blood Sugar Targets and Results
The iLet allows the user to choose from three blood sugar targets:
Higher
Usual
Lower
iLet glucose targets
That’s it. No numbers.
Of course, there are real numbers in the system. The “usual” setting, which most people use most of the time, sets a target blood sugar of 120 mg/dL. Choosing “lower” or “higher” shifts that target by 10 mg/dL in either direction.
Damiano insists that the precise numbers are an unimportant distraction: “They mean absolutely nothing to you.” Setting a blood sugar target of 120 mg/dL will not allow you to achieve a blood sugar average of 120 mg/dL, because the insulin delivery algorithm is “much more punishing of blood sugar measurements below target than above target, as it should be. The iLet does everything it can to prevent hypoglycemia.” Other insulin dosing systems work the same way.
What matters instead are the results. In the pivotal trial that led to its approval, the iLet helped the average user achieve an A1C of 7.3 percent, approaching the standard recommendation of less than 7 percent for adults with type 1 diabetes.
This was an improvement for most trial participants, who enjoyed an average A1C drop of 0.5 percentage points and spent an additional 11 percent of their average day in range (+2.6 hours daily). There was no significant increase in the risk of hypoglycemia.
Damiano adds that frequent use of the “lower” and “higher” glucose targets can toggle these results up and down. Users could elect to stay on the “lower” blood sugar target continuously.
“On the iLet, if you switched from the lower to higher targets, you’d see about 15 mg/dL difference in mean glucose. That’s half a percent of A1C, that’s a huge difference!”
“Now that is useful information. That’s what you wanna know.”
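To see where that “half a percent” comes from, you can plug the numbers into the standard ADAG regression used to convert A1C into estimated average glucose (eAG in mg/dL ≈ 28.7 × A1C − 46.7). The quick arithmetic below is our own back-of-the-envelope check, not anything from Beta Bionics:

```python
# Back-of-the-envelope check using the ADAG relationship:
#   eAG (mg/dL) = 28.7 * A1C - 46.7, so a glucose shift maps to an A1C shift of (shift / 28.7).

delta_glucose = 15                                 # mg/dL gap between the "lower" and "higher" targets
delta_a1c = delta_glucose / 28.7
print(f"~{delta_a1c:.2f} A1C percentage points")   # ~0.52, i.e. the "half a percent" quoted above

# And the trial's time-in-range gain quoted earlier:
extra_hours = 0.11 * 24                            # 11% more of the day spent in range
print(f"~{extra_hours:.1f} extra hours per day")   # ~2.6 hours
```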
The iLet is Not for Everyone
The iLet bionic pancreas is so radically streamlined that it may be a poor choice for the minority of people with type 1 diabetes who are already meeting or exceeding standard blood sugar targets. If, while reading this article, you find yourself wondering things like “How do I bolus for protein?” or “How do I reduce my basal rate for exercise?”, you’re probably not the right customer for the iLet.
The simple answer is, you cannot do those things. There is no setting that lets users employ a more detailed management strategy. You’re completely in the hands of the system, for better or for worse.
There’s at least one situation where this approach really reveals its flaws: exercise. There’s no exercise setting, no reduced basal rate, and the algorithm is unlikely to adapt quickly enough to understand why your blood sugar is plunging during a jog.
I asked Damiano: If I’m about to go for a run, what do I do? His response: “One of two things. You can take some carbs before you exercise, or you can disconnect from the iLet.” Neither is a great solution. For some people with type 1 diabetes, that loss of control is likely to be a deal-breaker.
“This is for the 80 percent of people that aren’t meeting their A1C goals,” Damiano says. If your personal goal is an A1C better than 7 percent, he says, “don’t look to the iLet for that. That’s not what it’s designed to do.”
The results bear this out in the subgroup data from the iLet’s pivotal trial.

Users who began the trial with an A1C under 7.0 percent did not improve their control while using the iLet. But participants who began the trial with an A1C over 9.0 percent experienced exceptional improvement: an A1C reduction of 1.23 percentage points and an additional 6.8 hours per day spent in range.
To be fair, it’s possible that some users with an A1C of around 7.0 percent really appreciated the reduced cognitive burden granted by the iLet system, even if their blood sugar management didn’t much improve. But some might prefer the Omnipod 5, a system that similarly uses adaptive learning to develop insulin delivery rates, but offers slightly more control, including manual and exercise modes.
The iLet “is potentially for people with A1Cs around 7.0 who want to reduce the burden of care on themselves and get similarly good glucose control. But it’s not for the person who wants an A1C of 5.0,” Damiano says.
“It helps the people that need it the most.”
Simplicity at the Doctor’s Office
One of the inevitable consequences of the iLet’s “set it and forget it” approach is that it minimizes the influence of healthcare providers. You don’t need an expert to help you review your blood sugar results and pump settings; the iLet is constantly making all the adjustments you might need.
“Other systems rely on a physician programming an insulin regimen into your pump. We just use bodyweight.”
Damiano sees this as a huge bonus, liberating healthcare providers from all the messy mathematics of diabetes and providing a system that can be understood without specialized training.
“The reason it’s so important to us is that as soon as you ask physicians to deposit a healthcare and insulin regimen, like basal rates and correction factors, primary care cannot use those devices. It immediately excludes them. They’re too complicated.”
The only choice a physician has to make is to decide which of the three vague blood sugar targets they should recommend. “And,” Damiano says, “we’d even like to remove that from their plate. Why should your doctor have to choose the target?”
During follow-up visits, healthcare providers really only have two details to look at: Is the patient using the right glucose target? And are they using the meal announcement feature?
“It’s simpler to use than any device. That’s why we think it’s for the people. It’s the insulin delivery system for the people.”
Other Details
The iLet is a conventional tubed pump, with an infusion set base that sticks to your body and a tiny cannula that sits under the skin.
You load your iLet pump with glass cartridges that you have filled with NovoLog or Humalog fast-acting insulin. Beta Bionics is currently developing a line of pre-filled Fiasp cartridges that you can pop directly into the pump. Fiasp is an ultra-rapid insulin that could result in even better blood sugar numbers after the algorithm gets trained on it.
The pump is recharged with an inductive charging pad. The manual recommends charging for 15 minutes per day.
The iLet is rated waterproof to the IPX8 standard, which means it should be safe in a swimming pool.
The pump system requires a Dexcom G6 continuous glucose monitor (CGM) to work. Beta Bionics is working with Dexcom on integrating the G7.
If your CGM goes offline, the pump can continue operating, using what it remembers about your insulin requirements, for up to 72 hours. The pump will prompt you for a fingerstick blood sugar measurement every four hours.
iLet users would be wise to be trained and ready to switch to another method of insulin delivery, such as multiple daily injections, in case they lose access to their CGM for any reason.
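Those two facts — up to 72 hours of continued operation and a fingerstick prompt every four hours — can be read as a simple fallback rule. The sketch below encodes only what the text above states; the constants come from this article, and everything else about how the pump doses while offline is not public and is deliberately not modelled.

```python
# Sketch of the CGM-offline behaviour described above. The two timing
# constants come from the article; the rest of the pump's logic is not
# public and is not modelled here.

MAX_OFFLINE_HOURS = 72        # pump keeps running on learned settings this long
FINGERSTICK_INTERVAL_H = 4    # hours between prompts for a manual reading

def offline_status(hours_since_cgm_signal: float) -> str:
    if hours_since_cgm_signal > MAX_OFFLINE_HOURS:
        return "switch to a backup method of insulin delivery (e.g., injections)"
    prompts = int(hours_since_cgm_signal // FINGERSTICK_INTERVAL_H)
    return f"still delivering insulin; fingerstick prompts so far: {prompts}"

print(offline_status(10))   # still delivering; prompted twice so far
print(offline_status(80))   # past 72 hours: fall back to another delivery method
```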
Takeaways
The iLet is only the fourth closed-loop insulin delivery system approved for sale in the United States. It is currently approved for people with type 1 diabetes over the age of 6.
This system introduces a radically simplified approach to diabetes management, eliminating the need for precise carb counting, basal rates, and correction factors. The iLet may not be a great choice for people who like to keep tight control over their insulin usage, but it could prove to be a massive help for individuals who aren’t meeting their A1C goals.
The system is for sale now, though initially many clinics and insurers may be unfamiliar with it. If you’re interested, click “Get Started” on Beta Bionics’ website.

Sources:

The iLet Bionic Pancreas and ‘Diabetes Without Numbers’

iLet Bionic Pancreas

https://www.nejm.org/doi/full/10.1056/NEJMoa2205225

Diabetes and Menstruation: Insulin Requirements Throughout the Menstrual Cycle – Diabetes Daily

How to Plan for Sick Days with Diabetes – Diabetes Daily

https://diabetesjournals.org/care/article/46/Supplement_1/S97/148053/6-Glycemic-Targets-Standards-of-Care-in-Diabetes

How to Calculate Bolus Insulin Dosing for Protein and Fat

Mastering Exercise With Type 1 Diabetes

How the Omnipod 5 – The Tubeless Closed-Loop Insulin Pump – Plans to Simplify Your Life

https://www.cnet.com/tech/mobile/is-my-phone-waterproof-ip68-ipx8-ip-ratings-explained/

https://www.fda.gov/news-events/press-announcements/fda-clears-new-insulin-pump-and-algorithm-based-software-support-enhanced-automatic-insulin-delivery#main-content


25 Smartphone Accessibility Settings You Need to Know About

These accessibility settings will change the way you use your phone, whether you have a disability or simply want to make your life easier.

We rely on our smartphones for just about everything, but most of us know only a fraction of what they can do. Thanks to accessibility settings on iPhones and Androids—such as screen readers, voice-to-text dictation, and more—our phones can make daily tasks more convenient, especially for people with disabilities. “Using the accessibility features built into most smartphones improves the lives of people with disabilities in exactly the same ways smartphones have improved the everyday lives of everyone,” says Matt Hackert, a nonvisual access technology specialist at the National Federation of the Blind.
These settings aren’t only for people with disabilities, though. Features like captions and sound amplification can help all of us more easily navigate our phones and the world around us. “Anyone can play with these on their iPhone to check out the possibilities,” says Ashley Shew, PhD, a disability tech expert and assistant professor at Virginia Tech.
Whether you live with a disability or just want to use your smartphone more effectively, you’ll want to find out what these accessibility settings do and how to use them on iPhones and Androids. Once you’re a pro, learn other hidden smartphone features you never knew about, such as how to hide text messages on an iPhone, how to turn off Google Assistant on your Android phone, and how to view (and delete) your iPhone’s call history.

iPhone accessibility settings
Apple launched the first iPhone accessibility features in 2009, and the brand has continued to expand its offerings ever since. Today, “Apple builds accessibility into the design process for everything we make so that people can access technology on their terms,” says Sarah Herrlinger, senior director of Apple’s Global Accessibility Policy & Initiatives. Experts recommend checking out the following accessibility settings on iPhones, all of which can be found under Settings > Accessibility. Make sure you’re looped in on these secrets Apple insiders know about iPhones, too.
VoiceOver
VoiceOver uses artificial intelligence to provide audible descriptions of items on your screen, from images to battery level to who is calling you. According to Herrlinger, VoiceOver is the world’s most popular mobile screen reader, with 70 percent of blind people using it every day. To turn VoiceOver on or off, go to Settings > Accessibility > VoiceOver and toggle the switch. You can also say, “Hey Siri, turn on VoiceOver” or “Turn off VoiceOver.”

Zoom
Want to learn how to zoom in and out on any iPhone screen to see text or images better? Get familiar with Apple’s Zoom feature. “Zoom magnifies the content on the screen and has many options to configure contrast, invert colors, and highlight focus,” says Hackert. Go to Settings > Accessibility > Zoom, then turn on Zoom. From there, you can activate the feature any time you need it by double-tapping the screen with three fingers. If you want to see more of the screen, move the Zoom lens by dragging the screen with three fingers. You can turn off Zoom by double-tapping the screen with three fingers again.
Magnifier
No need to carry around a pair of reading glasses—you can use your iPhone’s camera instead. “[As] a glasses wearer myself, [Magnifier] enables me to use my iPhone like a magnifying glass to read small print on things like medicine bottles and printed materials,” Herrlinger says. To try this feature, go to Settings > Accessibility > Magnifier and toggle the switch to the “on” position. Then open the app on the home screen and point your iPhone’s camera at the text or object you want to magnify. You can zoom in or out with the zoom control slider, or adjust the image’s appearance using the Brightness, Contrast, Color filters, and Flashlight buttons below.

Text Size
Customizing the text settings on iPhones can make the screen easier to see, especially for people with vision challenges. For how to make text bigger on iPhones, go to Settings > Accessibility > Display & Text Size. There, you can turn on the Larger Accessibility Sizes setting and adjust the size of the text using the Font Size slider. Once you do that, apps like Settings, Calendar, Contacts, Mail, Messages, and Notes will use your preferred text size rather than the default size. Turns out there are many more hidden iPhone hacks most of us don’t know about.
Text Color and Readability
iPhone also offers other text customization settings under its Display & Text Size feature, including inverting the display colors, increasing the contrast between the text and background, reducing the intensity of bright colors, and applying color filters. If you have color blindness, you can turn on the Button Shapes setting to underline hyperlinked text or the On/Off Labels setting to show numbers on sliders instead of colors. Like the text size feature, these settings are compatible with all of Apple’s apps, including Settings, Mail, and Messages.
Subtitles and Captions
When watching videos on your iPhone, you can turn on subtitles and closed captions through iPhone’s accessibility settings. Go to Settings > Accessibility > Subtitles & Captioning and turn on Closed Captions + SDH (subtitles for viewers who are deaf or hard of hearing). You can even customize the subtitles display by tapping Style, then choosing an existing caption style or creating a new style with your preferred font, size, color, opacity, and more. Make sure you also know these helpful iPhone and iPad keyboard shortcuts.
Headphone Accommodations
Want to know how to turn up the volume on your iPhone beyond the usual audio settings? Certain Apple and Beats headphones can help you amplify and adjust the sounds in the music, movies, phone calls, and podcasts you listen to on your device. Go to Settings > Accessibility > Audio/Visual > Headphone Accommodations, then turn on Headphone Accommodations. Tap Custom Audio Setup, then follow the instructions to customize the audio settings on your phone. Once you’re finished, you can test it out by tapping Play Sample.
Switch Control
Switch Control helps iPhone users with limited mobility perform actions like texting and opening apps by clicking a switch instead of tapping. A switch can be a keyboard key, mouse button, trackpad button, joystick, or adaptive device. For users who are nonverbal or nonspeaking, Apple recently launched Sound Actions for Switch Control, which “replaces physical buttons and switches with mouth sounds, like a click, pop, or ‘ee’ sound,” Herrlinger says. Add a new switch under Settings > Accessibility > Switch Control > Switches, then tap Add New Switch and choose a source. Then you can turn on Switch Control by going to Settings > Accessibility > Switch Control and turning the setting on or off.
People Detection
With People Detection, your iPhone scans the area around you, recognizes when other people are close by, and shares this information with vibrations or sounds. “This feature gives members of the blind and low vision community another tool to make the world more accessible,” Herrlinger says. It’s also a helpful reminder for social distancing during the coronavirus pandemic. Turn it on by opening the Magnifier app, tapping the Settings icon, tapping the “+” icon beside People Detection, and then choosing People Detection. From there, you can customize the measurement units, distance increments, and notification type. Don’t miss these other iPhone tricks that can make things so much easier.
Live Listen
If you use hearing devices, your iPhone’s Live Listen setting can help you hear conversations in loud places. Just connect your hearing devices to your iPhone and place your device close to the people who are speaking to boost the volume of their voices. To turn this feature on, go to Settings > Accessibility, then select Hearing Devices. Tap the name of your hearing device under MFi Hearing Devices, then tap Start Live Listen and place the phone in front of the person you want to hear. Turn it off again by going back to the Hearing Devices menu, tapping the name of your hearing device under MFi Hearing Devices, and then tapping End Live Listen.

Dictation
Dictation is a voice-to-text feature built into all iPhones. By allowing users to write and punctuate text with just their voices, it provides a hands-free (and efficient!) way to send text messages, emails, and other notes. “Many people, regardless of whether they are blind or not, find use for dictation,” Hackert says. To use Dictation, just open the keyboard in the app you want to use and tap the microphone button. Begin speaking to make the text appear on the screen. You can also insert periods or exclamation points by saying the punctuation you want to add. When you are finished with your message, tap the keyboard icon at the bottom of the screen.
Sound Recognition
Turning on the Sound Recognition feature will allow your iPhone to listen for certain sounds, like a doorbell or siren. If it detects those sounds, it will alert you by flashing and vibrating. To set it up, go to Settings > Accessibility > Sound Recognition, then turn on Sound Recognition. Tap Sounds and choose the sounds you want your iPhone to recognize.

Android accessibility settings
Accessibility settings on Android phones have boomed in recent years, according to Angana Ghosh, an Android group project manager. “Google’s mission is to make the world’s information accessible to everyone,” she says. “Over the years, we have launched new features, and we continue to improve those features by listening to user feedback and working directly with communities.” Hackert agrees: “Google lagged significantly behind Apple in the quality and usefulness of its accessibility technology [early on], but in more recent years, Google has really made strong advances, to the point where they are easily on par with Apple,” he says. Here are a few accessibility settings that Android users can try, along with these Android hacks you need to know.
What is Android Accessibility Suite?
The Android Accessibility Suite offers a wide range of accessibility settings to help the visually impaired navigate their devices. “Accessibility is core to the Android user experience, and we’re passionate about making smartphones useful for everyone, including people with disabilities,” Ghosh says. Found on nearly every version of Android, the Accessibility Suite can be activated through the Settings menu, where you can turn on features like a gesture-based screen reader and switch access. Bet you never knew about these hidden Android features either.
TalkBack
Like iPhone’s VoiceOver feature, the TalkBack screen reader on Androids will give audible descriptions of the text and images on your screen. You can activate it by going to Settings > Accessibility > TalkBack. Toggle the Use TalkBack feature on or off, then tap OK. Bonus: Users with blindness or low vision can also use Android’s TalkBack Braille Keyboard to add 6-dot braille to their keyboards—no extra hardware required. To turn on the keyboard’s braille mode, go to Accessibility > TalkBack > Settings > Braille Keyboard > Layout. This feature was designed by a low-vision Googler in Australia, according to Ghosh.
Action Blocks
Action Blocks was one of Android’s first accessibility settings. It offers customizable buttons on the Android home screen for routine actions like placing calls or controlling the lights, and it was “designed to make it easier for caregivers and people with cognitive disabilities and age-related cognitive conditions to fully access and perform tasks on their phone,” according to Ghosh. Create your own Action Blocks by downloading the Action Blocks app from the Google Play Store, then opening the app and tapping on Create Action Block. From there, you can choose one of the common actions from the list, such as Make Phone Call, and label it with an image, name, or both. Once you’re done, select Save Action Block.
Display and Font Size
If a visual impairment makes it hard to see your device’s screen, you can adjust the size and display to see items on your screen more clearly. To change the font size of text on your screen, go to Settings > Accessibility > Font Size and move the slider up or down. Make the images on your screen bigger by going to Settings > Accessibility > Display Size and adjusting the slider. Under Settings > Accessibility, you can also choose to turn on High Contrast Text, Dark Theme, Color Inversion, or Color Correction to make everything on the screen more visible.
Magnification
Still struggling to make out items on your Android phone’s screen? You can temporarily zoom or magnify your screen using Android’s Magnification tool. To turn on this feature, open the Settings app and tap Accessibility > Magnification > Magnification Shortcut. Now, when you need to magnify your screen, just tap the Accessibility button and tap anywhere on the screen. Drag two fingers to move around the screen, or pinch with two fingers to adjust the zoom. When you’re done, tap the Accessibility button again.
Lookout
Using an Android device’s camera and sensors, Lookout can help people with blindness or low vision learn more about their surroundings. The feature relies on computer vision to recognize an object or text, then describes it to the user. Just install the Lookout app on Google Play, then open it by saying, “OK, Google, start Lookout,” or by selecting Lookout in the Apps section. After giving the app permission to access your camera, hold your device with your camera facing outward. Your device will now be able to read text, documents, and food labels, describe your surroundings, and even recognize currency.
Voice Access
With Voice Access, users can provide spoken commands to do everything from opening apps to typing messages to placing a call. “This feature can be particularly helpful to people with dexterity impairments, which may make it difficult to touch a phone screen,” Ghosh says. After installing the app from Google’s app store, you can turn on this setting by going to Settings > Accessibility > Voice Access and tapping Use Voice Access. From there, start Voice Access by opening the Voice Access app or saying, “Hey, Google, Voice Access.” In case you were wondering, here’s how Alexa can help in emergencies.
Time to Take Action
If you have ADHD, “chemobrain,” or other cognitive disabilities, the Time to Take Action feature on Androids can be a helpful reminder tool. This setting keeps temporary alerts like calendar notifications, text messages, and more on your screen for a longer duration. After receiving intense chemotherapy eight years ago, Shew still struggles with memory loss and relies on her phone to remember the things that she can’t. “I have my alarms ring to remind me of a lot of things, like picking kids up from school,” she says. “Some people [with cognitive disabilities] use phone reminders and notifications from calendars, too.” You can adjust how long these temporary alerts and notifications stay on your screen by going to Settings > Accessibility > Time to Take Action (Accessibility Timeout) and choosing your preferred timeout length.

Voice Input
Voice Input is a voice-to-text feature on Android that allows users with physical or visual impairments to type text messages, emails, and other notes by saying the words out loud. Just launch any app that uses text, like Email or Messages, and tap in the text field to make the on-screen keyboard appear. Then tap the microphone icon and begin saying your message. When you’re finished, tap the microphone icon again, and then hit Send or Save. These smartphone keyboard shortcuts will make texting faster, too.
Sound Amplifier
Launched in 2019, the Sound Amplifier app connects with your headphones or hearing aids to boost and filter the sounds nearby or on your Android phone. “[The app] aims to help the deaf and hard of hearing community by providing an additional option for absorbing sound in the world around you, whether that’s turning up the sound of the television or setting your phone closer to a professor in class so you can hear the lecture with more clarity,” Ghosh says. Using Sound Amplifier is simple: Just download the app from the Google Play Store, then connect your headphones to your Android device, open the Sound Amplifier app, and follow the on-screen instructions.
Live Transcribe
Android’s Live Transcribe app provides real-time speech-to-text captions in more than 80 languages, along with more than 30 common sounds like applause or laughter, for people who are deaf or hard of hearing. Android also offers a Live Caption feature that automatically captions the videos, podcasts, phone and video calls, and audio messages played on your device. To use Live Transcribe, download the app on Google Play, then open the app and hold your device near the person or sound to begin transcribing. Live Caption can be found under Settings > Sound > Live Caption. Toggle the switch to the “on” position to enable the feature. You can also adjust the settings to hide profanity and sound labels.
Sound Notifications
With a single tap, your phone could save a life. Android phones now offer Sound Notifications, a relatively new feature that alerts users when it hears sounds like fire alarms, doorbells, crying babies, and more. “This technology builds off of our sound detection work in Live Transcribe to provide a better picture of overall sound awareness,” Ghosh says. To activate Sound Notifications, download the Live Transcribe & Sound Notifications app, then go to Settings > Accessibility > Sound Notifications > Open Sound Notifications and tap OK.
Switch Access
Switch Access on Android devices works similarly to the same tool on iPhones. If users have limited mobility or sensory issues, Switch Access allows them to navigate their phones with a designated “switch” like a keyboard key or mouse button instead of tapping. The phone will continuously scan the items on the screen, highlighting each item until the user selects one using the switch. After connecting an external switch device or keyboard to your Android device via USB or Bluetooth, you can enable this tool by going to Settings > Language & Input > Select Keyboard. Then tap Show Virtual Keyboard (Android 7.0 or later) or Hardware (Android 6.0 or earlier).
Morse Code Keyboard
In 2018, Google partnered with developer Tania Finlayson, an expert in Morse code assistive technology, to add Morse code to Gboard keyboards on Android phones. Finlayson was born with cerebral palsy and uses Morse code to communicate in her daily life. “Developing communication tools like this is important, because for many people, it simply makes life livable,” she said in a press release. Android users can set up the Morse code keyboard on their phones by installing Gboard, then going to Settings > System > Languages & Input > Virtual Keyboard > Keyboard. Tap Languages > English, and then swipe right through the options until reaching Morse code. Select Morse code, then tap Done. Next, check out the cell phone accessories you’ll end up using every day.
Sources:

https://www.rd.com/article/accessibility-settings/

29 Cell Phone Hacks You’ll Wonder How You Ever Lived Without

How to Hide Text Messages on an iPhone

How to Turn Off Google Assistant on Your Android Phone

How to View (and Delete) Your iPhone’s Call History


What Apple Insiders Know About iPhones That You Don’t

64 Hidden iPhone Tips and Tricks You Never Knew About

How to Create iPhone Text Shortcuts

40 iPhone Tricks That Will Make Things So Much Easier

13 Hidden Android Hacks You Never Knew About

16 Hidden Android Features You Never Knew About

Can Alexa Call 911? How Alexa Can Help in Emergencies

9 Hidden Symbols You Never Knew You Could Text

8 Things You Should Never Do or Say to a Deaf Person

15 Cell Phone Accessories You’ll End Up Using Every Day

25 Smartphone Accessibility Settings You Need to Know About

What is ableism language and why we should avoid using it

Making AI delivery robots disability-friendly and ‘cautious pedestrians’

By Beth Rose
BBC Access All

The company behind AI robots which deliver shopping to your door has said it “constantly” talks to disabled people to ensure safety.
The knee-high machines from Starship Technologies can carry three bags across town.
They use the same pavements as pedestrians, and a new panel advises on collision avoidance.
Now in Wakefield, the team says “lived experience” and knowledge of disability are at the heart of its operation.
“They look like freezer coolers on wheels,” says Lisa Johnson, head of public affairs at Starship Technologies. “It trundles along on its six little wheels and it can climb up and down the kerbs as well.”
But as a robot designed to use pavements, it could have become another frustrating obstacle for disabled people to navigate, such as abandoned bikes, e-scooters and street furniture.

But Lisa told the BBC’s Access All podcast that some safety solutions have already been put in place and that the robots have been programmed to be “cautious pedestrians”.
They use obstacle avoidance technology – sensors and a camera – to track what is moving towards them, and how quickly.
“Its job is to stay out of your way,” she adds.
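To make that concrete, here is a deliberately simple sketch of the kind of calculation an obstacle-avoidance system can make: estimate how long until an approaching pedestrian reaches the robot, and yield if that time drops below a threshold. The function name, threshold, and numbers are illustrative assumptions, not Starship Technologies’ actual software.

```python
# Illustrative time-to-collision heuristic (an assumption for explanation,
# not Starship Technologies' actual obstacle-avoidance code).

def should_yield(distance_m: float, closing_speed_mps: float, threshold_s: float = 4.0) -> bool:
    """Yield if the approaching pedestrian would reach the robot within threshold_s seconds."""
    if closing_speed_mps <= 0:           # stationary, or moving away: no need to yield
        return False
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < threshold_s

print(should_yield(5.0, 1.5))   # True: about 3.3 seconds away, so pull aside
print(should_yield(5.0, 0.4))   # False: 12.5 seconds away, keep trundling along
```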

One of the scenarios the company has focused on is what happens when a wheelchair-user and robot come across each other on a narrow path.
A similar problem made headlines in America in 2019 when a student at the University of Pittsburgh tweeted she had been trapped on a road as traffic approached because a Starship Technologies robot was blocking the only accessible entrance to the sidewalk.
At the time, she told the local radio station, 90.5 WESA: “It was really bizarre to realize that a non-sentient thing was putting me in danger and making me feel I was helpless. I think I was just laughing at it like, ‘Oh cool, this is my life right now’.”

The robots were removed for several days. And after reviewing the footage of the incident, the company released a statement saying it disputed the student had been impeded from getting on the sidewalk, but it did update its mapping system.
Lisa says that since that incident, “we spent a lot of time having the robots learn what mobility devices look like”, and the robots now know to get out of the way.
If a robot can’t get out of the way on its own, human back-up will always be nearby to step in and assist.
One mobility aid it currently struggles to recognise is a white cane, used by blind and visually impaired people.
“Canes are really thin,” Lisa says. “And the robots don’t encounter canes very often. So we’ve got to make sure we keep having these interactions so the robots can understand what canes are.”
After more on-the-job learning, it is hoped the robots will detect a cane and make their presence known with a spoken message: “Hi, I’m a Starship robot, I’m just letting you know that I’m here.”

Steve Tyler, director of assistive technology at the charity Leonard Cheshire, is one of those who signed up to Starship Technologies’ Disability Advisory Panel. He is blind himself.
“There are lots of opportunities, [but] there are also lots of threats,” Steve says of the rapidly-developing technology. “We need to be involved from the outset as a disability community to ensure that we drive some of what is delivered.”
One element Steve is keen to see improved is the arrival of the device at someone’s home. Currently, once you lift the lid to retrieve your shopping it plays a song of your choice.
But how would a blind person know it was there?
“You might want a signal before that happens, so you know where it is,” he advises.
Although this technology might seem futuristic, Steve says it is essential everyone is involved in the conversation around such technology, as it has the potential to quickly become the norm and impact how we all live in the future.
“These technologies not only bring accessibility closer to clients that need it, but it also has an impact on, potentially, how we develop cities and towns, how we lay out pavements, how we lay out shared spaces.”
As for the song it sings as you open its lid to retrieve your shopping, that’s also become a contentious issue, according to Lisa.
“One of our most popular songs at the moment is Baby Shark. Is that a plus or a minus? I don’t know at this point.”
You can listen to the podcast and find information and support on the Access All homepage.

https://www.bbc.co.uk/programmes/p02r6yqw

Sorting through all the AI lingo? Here’s a glossary to help

Hey, did you hear about LIMA? It’s built on the LLM LLaMA, not to be confused with LaMDA.
The language of AI is riddled with acronyms, platform names, tech slang and theories. If you’ve ever overheard a conversation about AI and thought, “What the heck is Stable Diffusion, and how is it different from ChatGPT?” but were too afraid to ask, we’ve put together an AI glossary to help you navigate some of the lingo and identify which tech companies are behind which tools.

AI has been moving so quickly in 2023 that this list could be obsolete before long. There will most definitely be new terms emerging over the summer, and who knows where AI will be by the fall? But for now, we hope this helps:

AI Glossary

Act as if: A prompt starter for AI chatbots that has the chatbot respond as if it were something specific (e.g., a job interviewer, therapist or fictional character)

Algorithm: Instructions that a computer program follows to operate on its own

Artificial general intelligence (AGI): An artificial intelligence system that can learn and adapt, as opposed to one whose capabilities are limited to what is programmed

Alignment: A field of research that aims to make sure AI aligns with human value codes; for example, AI models may be trained to refuse to tell a user how to build a bomb or steal data

Ameca: A humanoid robot designed by UK-based Engineered Arts as a platform for developing interactive AI

Artstation: The largest online digital artist community on the internet; “Trending on Artstation” is a common prompt for creating AI art

Autonomous: A robot, vehicle or device that operates without human control

Bard: Google’s AI chatbot, powered by PaLM 2. Bard is not an acronym; the chatbot is named after William Shakespeare, the “Bard of Avon”

Bidirectional Encoder Representations from Transformers (BERT): A Google machine learning framework for natural language processing used since 2018 for tasks such as predicting text in search

Bias: When an AI algorithm produces systemically prejudiced results due to biases in the training data.

BingGPT: Bing’s ChatGPT-based chatbot

Black Box AI: A machine learning concept where developers do not control or understand how the AI model processes information. The opposite of “Explainable AI”

Blinding: A method where certain information is intentionally withheld from an AI to make it more challenging to exploit

Boxing: A method where an AI is isolated, for example, by not connecting it to the internet, to prevent it from potentially causing harm outside of its developers’ control

ChatGPT: A deep learning chatbot by OpenAI, first released to the public in November 2022. The current version is built on the GPT-4 model

Chatbot: A computer program that uses AI and natural language processing to respond to human questions in real time

Clone: An AI clone uses voice and video data of a person to create an interactive digital version of that person

Convolutional neural network: An artificial neural network that can be trained to recognize objects or patterns, but is not predictive

Confabulate: When an AI model randomly answers with false information presented as fact, often a result of insufficient data or bias. Interchangeable with “hallucinate.”

Confinement: Also known as AI capability control, AI confinement is a field related to alignment that aims to keep human control over AI systems.

Corpus: A large set of texts used to train an AI that uses natural language processing; these could be anything from social media posts to news articles to movies

Dall-E: OpenAI’s deep learning model for creating images

Data Dignity: A movement that advocates for the AI economy giving people control over their data and compensating them when data about or created by them is used

Data poisoning: A type of cyber attack where inaccurate or otherwise bad data is incorporated into an AI model’s training data set, causing it to give inaccurate or harmful results

Data mining: The process of analyzing datasets to discover new patterns that might improve the model

Defense Advanced Research Projects Agency (DARPA): The military research and development agency of the United States Department of Defense, a major AI and XAI researcher

Deep learning: An AI function of neural networks where a model learns how to respond based on data it’s given rather than simply performing what is programmed

Deepfake: Using AI to create video, images or voices that appear to be real but are not

Diffusion model: A generative AI model that creates high-resolution images by learning to reverse a process that gradually adds noise to its training data, then producing new samples by removing noise step by step

DreamStudio: The official web app of Stable Diffusion, a major deep learning text-to-image AI engine

Explainable AI (XAI): A type of machine learning that designers can explain or interpret. The opposite of “Black Box AI”

Gemini: Google’s next-generation AI model, announced in 2023; unlike Bard (which is powered by PaLM 2), it is designed to be multimodal, handling text, image, sound and video

Generative AI: AI that creates output, including text, images, music and video

Golden prompts: Prompts that have been engineered to give the user desirable results and can be used as a template for other prompts

Generative Pre-trained Transformer (GPT): OpenAI’s large language model on which the ChatGPT chatbot is built

Hallucinate: When an AI model randomly answers with false information presented as fact, often a result of insufficient data or bias. Interchangeable with “confabulate”

Humanoid AI: A physical robot designed to look like a human with AI neural networks allowing it to interact with humans. Sophia and Ameca are examples of humanoids in development.

Hypothetical intelligence agent: Potential artificial general AI that rewrites its own code to become independent of human programming

Imagen: Google’s text-to-image diffusion model, which outputs photorealistic images

Language Model for Dialogue Applications (LaMDA): A Google language model designed to engage in conversations that naturally evolve from one subject to another

LAION: A German non-profit that releases open-source datasets and models, including the large image datasets used to train text-to-image systems such as Stable Diffusion and Imagen; it has met controversy for scraping images from art sites like ArtStation and DeviantArt

Large Language Model Meta AI (LLaMA): Meta’s large language model, released in February 2023

Large Language Model (LLM): A deep-learning transformer model that is trained to understand natural language and respond in a human-like way

Lensa: A Stable Diffusion-based photo and video filter program by Prisma Labs that uses AI to transform images/selfies; many AI filters are built into TikTok, where they are popular and free

Less is More for Alignment (LIMA): Meta’s newest language model, considered competitive with Bard and ChatGPT, built on its LLaMA LLM.

Long Short-Term Memory (LSTM): First developed in 1997, a variety of recurrent neural networks (RNNs) that are capable of learning long-term dependencies, especially in sequence prediction problems

Low-rank adaptation (LoRA): A training method from Microsoft researchers that freezes an LLM’s pretrained weights and trains small additional “adapter” matrices, making fine-tuning more efficient and cost-effective

Machine learning: The process or field of developing artificial intelligence by feeding a computer data and using the results to improve and evolve the technology.

Massively Multilingual Speech (MMS): Meta’s text-to-speech/speech-to-text AI model that can process over 1,100 languages

Meta Megabyte: AI architecture by Meta AI that can process large volumes of data without breaking down the input into smaller units (tokenization)

Midjourney: A generative AI text-to-image platform by San Francisco research lab Midjourney, Inc. Users create AI images through its Discord.

Moat: Not exclusively an AI term, a moat is a competitive advantage an AI company has over its competitors when its proprietary technology creates a barrier for other companies from entering the market

Multimodal: An AI model that combines multiple types of data, including video, text, audio and images

Narrow AI: AI that is designed to perform a single or narrow range of tasks, such as search engines, virtual assistants and facial recognition software

Natural Language Processing (NLP): A type of linguistic computer science that programs computers to analyze and process natural language data, so, for example, Alexa can “listen” and respond to a human voice

Neural Network: A method in AI where computers are trained to process data like a human brain rather than a programmed machine. Deep learning models are made up of neural networks

Oracle: A hypothetical controlled AI platform that can only answer simple questions and can not grow its knowledge beyond its immediate environment

Output: What the AI creates when prompted; it could be text, image, music or video

PaLM 2: Google’s AI model, used for Bard, Gemini and other Google AI uses

Playground AI: A free (up to 1,000 images a day) AI art generator using Stable Diffusion

Prompt crafting: Creating text prompts to interact with AI in a way that produces the desired results; interchangeable with “prompt engineering,” sometimes preferred by people who use AI for creative uses

Prompt engineering: Creating text prompts to interact with AI in a way that produces the desired results; interchangeable with “prompt crafting,” sometimes preferred by people who use AI for technical uses

Prompt framework: An outline of a prompt that includes all of the steps and information to create a specific output

Reactive AI: AI that provides output based on the input it receives, but does not learn or evolve. Examples include spam filters and recommendations based on your activity

Recurrent neural network (RNN): An artificial neural network that recognizes recurring patterns and uses the data to predict what comes next, often used in speech recognition and natural language processing

Seed AI: A type of hypothetical intelligence agent that eventually does not need human intervention to learn new things

Self-awareness: A level of AI, currently only existing in science fiction, in which AI has a level of consciousness similar to human beings, with emotions and needs

Sophia: An advanced, socially intelligent humanoid robot created in 2016 by Hong Kong-based Hanson Robotics

Stable Diffusion: An open-source, deep learning, text-to-image model released in 2022 by Stability AI. In April 2023, a new version called SDXL was released in beta; its official web app is DreamStudio

Theory of mind (ToM): In AI, ToM, or “emotional intelligence,” is when a machine can recognize human emotions and adjust its behavior in response. Early ToM models include humanoid robots Ameca and Sophia

Tokenization: Splitting large-volume input or output into smaller units (tokens) so that large language models can handle it; see the short sketch after this glossary

Transformer: A neural network architecture invented and open-sourced by Google Research in 2017. Language models including GPT-3, LaMDA and BERT were built on the Transformer

Vicuna: An open-source chatbot built by academic researchers on top of Meta’s LLaMA-13B model, considered a competitor of Bard and ChatGPT
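As a concrete illustration of the Tokenization entry above, the short sketch below splits a sentence into naive word and character tokens. Real LLM tokenizers use learned subword vocabularies (for example, byte-pair encoding), so the actual splits differ, but the principle of turning text into small, countable units is the same:

```python
# Naive tokenization sketch: real LLM tokenizers use learned subword
# vocabularies (e.g., byte-pair encoding), but the principle is the same.

text = "Sorting through all the AI lingo?"

word_tokens = text.split()   # whitespace tokenization
char_tokens = list(text)     # character-level tokenization

print(word_tokens)           # ['Sorting', 'through', 'all', 'the', 'AI', 'lingo?']
print(len(char_tokens))      # 33 units for the same sentence
```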
