Making AI delivery robots disability-friendly and ‘cautious pedestrians’
By Beth Rose
BBC Access All

The company behind AI robots which deliver shopping to your door has said it “constantly” talks to disabled people to ensure safety.
The knee-high machines from Starship Technologies can carry three bags across town.
They use the same pavements as pedestrians, and a new panel advises on collision avoidance.
Now in Wakefield, the team says “lived experience” and knowledge of disability are at the heart of its operation.
“They look like freezer coolers on wheels,” says Lisa Johnson, head of public affairs at Starship Technologies. “It trundles along on its six little wheels and it can climb up and down the kerbs as well.”
But as a robot designed to use pavements, it could have become another frustrating obstacle for disabled people to navigate, such as abandoned bikes, e-scooters and street furniture. However, Lisa told the BBC’s Access All podcast that some safety solutions have already been put in place and the robots have been programmed to be “cautious pedestrians”.
The robots use obstacle-avoidance technology – sensors and a camera – to track what is moving towards them, and how quickly.
“Its job is to stay out of your way,” she adds.
One of the scenarios the company has focused on is what happens when a wheelchair-user and robot come across each other on a narrow path.
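The article does not describe Starship’s actual algorithm, but the narrow-path scenario can be illustrated with a toy time-to-collision check of the kind an obstacle-avoidance system might run. All names and thresholds below are hypothetical, purely for illustration:

```python
# Toy "cautious pedestrian" logic: estimate time-to-collision from
# sensed distance and closing speed, and yield well before contact.
# Thresholds and function names are invented for this sketch.

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact; infinity if the object is not approaching."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

def should_yield(distance_m: float, closing_speed_mps: float,
                 threshold_s: float = 4.0) -> bool:
    """Stop and pull aside when contact could occur within threshold_s."""
    return time_to_collision(distance_m, closing_speed_mps) < threshold_s

# A pedestrian 6 m away closing at 2 m/s gives 3 s to contact: yield.
print(should_yield(6.0, 2.0))   # True
# The same pedestrian closing at 0.5 m/s gives 12 s: keep moving, stay alert.
print(should_yield(6.0, 0.5))   # False
```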
A similar problem made headlines in America in 2019 when a student at the University of Pittsburgh tweeted she had been trapped on a road as traffic approached because a Starship Technologies robot was blocking the only accessible entrance to the sidewalk.
At the time, she told the local radio station, 90.5 WESA: “It was really bizarre to realize that a non-sentient thing was putting me in danger and making me feel I was helpless. I think I was just laughing at it like, ‘Oh cool, this is my life right now’.”

The robots were removed for several days. And after reviewing the footage of the incident, the company released a statement saying it disputed the student had been impeded from getting on the sidewalk, but it did update its mapping system.
Lisa says that since that incident “we spent a lot of time having the robots learn what mobility devices look like”, and the robots now know to get out of the way.
If a robot can’t get out of the way on its own, human back-up will always be nearby to step in and assist.
One mobility aid it currently struggles to recognise is a white cane, used by blind and visually impaired people.
“Canes are really thin,” Lisa says. “And the robots don’t encounter canes very often. So we’ve got to make sure we keep having these interactions so the robots can understand what canes are.”
After more on-the-job learning, it is hoped the robots will detect a cane and make their presence known with a spoken message: “Hi, I’m a Starship robot, I’m just letting you know that I’m here.”

Steve Tyler, director of assistive technology at the charity Leonard Cheshire, is one of those who signed up to Starship Technologies’ Disability Advisory Panel. He is blind himself.
“There are lots of opportunities, [but] there are also lots of threats,” Steve says of the rapidly-developing technology. “We need to be involved from the outset as a disability community to ensure that we drive some of what is delivered.”
One element Steve is keen to see improved is the arrival of the device at someone’s home. Currently, once you lift the lid to retrieve your shopping it plays a song of your choice.
But how would a blind person know it was there?
“You might want a signal before that happens, so you know where it is,” he advises.
Although this technology might seem futuristic, Steve says it is essential everyone is involved in the conversation around it, as it has the potential to quickly become the norm and shape how we all live in the future.
“These technologies not only bring accessibility closer to clients that need it, but it also has an impact on, potentially, how we develop cities and towns, how we lay out pavements, how we lay out shared spaces.”
As for the song it sings as you open its lid to retrieve your shopping, that’s also become a contentious issue, according to Lisa.
“One of our most popular songs at the moment is Baby Shark. Is that a plus or a minus? I don’t know at this point.”
You can listen to the podcast and find information and support on the Access All homepage.
Sorting through all the AI lingo? Here’s a glossary to help
Hey, did you hear about LIMA? It’s built on the LLM LLaMA, not to be confused with LaMDA.
The language of AI is riddled with acronyms, platform names, tech slang and theories. If you’ve ever overheard a conversation about AI and thought, “What the heck is Stable Diffusion, and how is it different from ChatGPT?” but were too afraid to ask, we’ve put together an AI glossary to help navigate some of the lingo and identify which tech companies are behind which tech.

AI has been moving so quickly in 2023 that this list could be obsolete before long. There will most definitely be new terms emerging over the summer, and who knows where AI will be by the fall? But for now, we hope this helps:
AI Glossary
Act as if: A prompt starter for AI chatbots that has it respond as if it is something specific (e.g: job interviewer, therapist, fictional character)
Algorithm: Instructions that a computer program follows to operate on its own
Artificial general intelligence (AGI): An artificial intelligence system that can learn and adapt, as opposed to its capabilities being limited to what is programmed
Alignment: A field of research that aims to make sure AI aligns with human value codes; for example, AI models may be trained to refuse to tell a user how to build a bomb or steal data
Ameca: A humanoid robot designed by UK-based Engineering Arts as a platform for developing interactive AI
Artstation: The largest online digital artist community on the internet; “Trending on Artstation” is a common prompt for creating AI art
Autonomous: A robot, vehicle or device that operates without human control
Bard: Google’s AI chatbot, powered by PaLM 2. Bard is not an acronym; the chatbot is named after William Shakespeare, the “Bard of Avon”
Bidirectional Encoder Representations from Transformers (BERT): A Google machine learning framework for natural language processing used since 2018 for tasks such as predicting text in search
Bias: When an AI algorithm produces systemically prejudiced results due to biases in the training data.
BingGPT: Bing’s ChatGPT-based chatbot
Black Box AI: A machine learning concept where developers do not control or understand how the AI model processes information. The opposite of “Explainable AI”
Blinding: A method where certain information is intentionally withheld from an AI to make it more challenging to exploit
Boxing: A method where an AI is isolated, for example, by not connecting it to the internet, to prevent it from potentially causing harm outside of its developers’ control
ChatGPT: A deep learning chatbot by OpenAI, first released to the public in November 2022. The current version is built on GPT-4
Chatbot: A computer program that uses AI and natural language processing to respond to human questions in real time
Clone: An AI clone uses voice and video data of a person to create an interactive digital version of that person
Convolutional neural network: An artificial neural network that can be trained to recognize objects or patterns, but is not predictive
Confabulate: When an AI model randomly answers with false information presented as fact, often a result of insufficient data or bias. Interchangeable with “hallucinate.”
Confinement: Also known as AI capability control, AI confinement is a field related to alignment that aims to keep human control over AI systems.
Corpus: A large set of texts used to train an AI that uses natural language processing; these could be anything from social media posts to news articles to movies
Dall-E: OpenAI’s deep learning model for creating images
Data Dignity: A movement that advocates for the AI economy giving people control over their data and compensating them when data about or created by them is used
Data poisoning: A type of cyber attack where inaccurate or otherwise bad data is incorporated into an AI model’s training data set, causing it to give inaccurate or harmful results
Data mining: The process of analyzing datasets to discover new patterns that might improve the model
Defense Advanced Research Projects Agency (DARPA): The military research and development agency of the United States Department of Defense, a major AI and XAI researcher
Deep learning: An AI function of neural networks where a model learns how to respond based on data it’s given rather than simply performing what is programmed
Deepfake: Using AI to create video, images or voices that appear to be real but are not
Diffusion model: A generative AI model that can create high-resolution images by creating new data samples on top of the one they were trained on, leading to higher-quality images
DreamStudio: The web app of Stable Diffusion, a major deep learning text-to-image AI engine.
Explainable AI (XAI): A type of machine learning that designers can explain or interpret. The opposite of “Black Box AI”
Gemini: A Google language model powered by PaLM 2; unlike Bard, it has multimodal capability (text, image, sound and video)
Generative AI: AI that creates output, including text, images, music and video
Golden prompts: Prompts that have been engineered to give the user desirable results and can be used as a template for other prompts
Generative Pre-trained Transformer (GPT): OpenAI’s large language model on which the ChatGPT chatbot is built
Hallucinate: When an AI model randomly answers with false information presented as fact, often a result of insufficient data or bias. Interchangeable with “confabulate”
Humanoid AI: A physical robot designed to look like a human with AI neural networks allowing it to interact with humans. Sophia and Ameca are examples of humanoids in development.
Hypothetical intelligence agent: Potential artificial general AI that rewrites its own code to become independent of human programming
Imagen: A text-to-image diffusion AI Image creator that outputs photo-realistic images
Language Model for Dialogue Applications (LaMDA): A Google language model designed to engage in conversations that naturally evolve from one subject to another
LAION: A German non-profit that makes open-source deep learning models, including the models Stable Diffusion and Imagen are built on; has met controversy for scraping images from art sites like ArtStation and Deviant Art.
Large Language Model Meta AI (LLaMA): Meta’s large language model, released in February 2023
Large Language Model (LLM): A deep-learning transformer model that is trained to understand natural language and respond in a human-like way
Lensa: A Stable Diffusion-based photo and video filter program by Prisma Labs that uses AI to transform images/selfies; many AI filters are built into TikTok, where they are popular and free
Less is More for Alignment (LIMA): Meta’s newest language model, considered competitive with Bard and ChatGPT, built on its LLaMA LLM.
Long Short-Term Memory (LSTM): First developed in 1997, a variety of recurrent neural networks (RNNs) that are capable of learning long-term dependencies, especially in sequence prediction problems
Low-rank adaptation (LoRA): A Microsoft training method that freezes part of an LLM to make fine-tuning it more efficient and cost-effective
Machine learning: The process or field of developing artificial intelligence by feeding a computer data and using the results to improve and evolve the technology.
Massively Multilingual Speech (MMS): A text-to-speech/speech-to-text AI model that can process over 1,100 languages
Meta Megabyte: AI architecture by Meta AI that can process large volumes of data without breaking down the input into smaller units (tokenization)
Midjourney: A generative AI text-to-image platform by San Francisco research lab Midjourney, Inc. Users create AI images through its Discord.
Moat: Not exclusively an AI term, a moat is a competitive advantage an AI company has over its competitors when its proprietary technology creates a barrier for other companies from entering the market
Multimodal: An AI model that combines multiple types of data, including video, text, audio and images
Narrow AI: AI that is designed to perform a single or narrow range of tasks, such as search engines, virtual assistants and facial recognition software
Natural Language Processing (NLP): A type of linguistic computer science that programs computers to analyze and process natural language data, so, for example, Alexa can “listen” and respond to a human voice
Neural Network: A method in AI where computers are trained to process data like a human brain rather than a programmed machine. Deep learning models are made up of neural networks
Oracle: A hypothetical controlled AI platform that can only answer simple questions and cannot grow its knowledge beyond its immediate environment
Output: What the AI creates when prompted; it could be text, image, music or video
PaLM 2: Google’s AI model, used for Bard, Gemini and other Google AI uses
Playground AI: A free (up to 1,000 images a day) AI art generator using Stable Diffusion
Prompt crafting: Creating text prompts to interact with AI in a way that produces the desired results; interchangeable with “prompt engineering,” sometimes preferred by people who use AI for creative uses
Prompt engineering: Creating text prompts to interact with AI in a way that produces the desired results; interchangeable with “prompt crafting,” sometimes preferred by people who use AI for technical uses
Prompt framework: An outline of a prompt that includes all of the steps and information to create a specific output
Reactive AI: AI that provides output based on the input it receives, but does not learn or evolve. Examples include spam filters and recommendations based on your activity
Recurrent neural network (RNN): An artificial neural network that recognizes recurring patterns and uses the data to predict what comes next, often used in speech recognition and natural language processing
Seed AI: A type of hypothetical intelligence agent that eventually does not need human intervention to learn new things
Self-awareness: A level of AI, currently only existing in science fiction, in which AI has a level of consciousness similar to human beings, with emotions and needs
Sophia: An advanced, socially intelligent humanoid robot created by Hong Kong-based Hanson Robotics in 2016
Stable Diffusion: An open-source, deep learning, text-to-image model released in 2022 by Stability AI. In April 2023, a new version called SDXL was released in beta; its official web app is DreamStudio
Theory of mind (ToM): In AI, ToM, or “emotional intelligence,” is when a machine can recognize human emotions and adjust its behavior in response. Early ToM models include humanoid robots Ameca and Sophia
Tokenization: Splitting large volume input or output into smaller units in order to make them manageable by large language models
Transformer: A neural network architecture invented and open-sourced by Google Research in 2017. Models including GPT-3, LaMDA and BERT were built on the Transformer
Vicuna: An open-source chatbot that runs on Meta’s LLaMA-13B, considered a competitor of Bard and ChatGPT
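Several of the entries above (Tokenization, Large Language Model, Transformer) refer to splitting text into smaller units before a model processes it. A toy greedy longest-match tokenizer makes the idea concrete; the tiny vocabulary here is invented for this example, whereas real tokenizers such as BPE learn their vocabularies from data:

```python
# Toy sub-word tokenizer: greedily match the longest vocabulary piece
# at each position; characters not covered by the vocabulary stand alone.
# The vocabulary below is made up purely for illustration.

VOCAB = {"un", "break", "able", "token", "ize", "s", " "}

def tokenize(text: str) -> list[str]:
    """Greedy longest-match tokenization against VOCAB."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try the longest piece first
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:                               # no vocabulary piece matched
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("unbreakable"))   # ['un', 'break', 'able']
```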
AI? Artificial Intelligence? Accessible Intelligence!
What We Can Look Forward To in the New AI World
The Journey
For the uninitiated, the concept of AI could seem new, but I can remember learning the term in the late 1980s when I took a course in LISP. LISP is a computer language originally developed over sixty years ago at MIT; it was there at the birth of artificial intelligence (AI) and remains prevalent in AI research today.
The problem back then was that there was no practical AI-based application that we could easily understand and embrace. What is new today is that AI is finally mainstream and being given front-and-center attention because of its practical applications that are really helping people.
Like anything in life, there is a good side and a bad side, a positive and a negative to everything. There is a lot of negative press in the media on the bad side of AI and how, if not properly managed, it can hurt society. I am not discounting many of these reports, but for this article, I am going to comment on the positive side of AI’s contributions to people’s lives and, more specifically, those who live with daily challenges, sometimes known as disabilities.
In this post, I will look at:
How AI is already helping people with disabilities
The future of AI for people living with disabilities and how it can help
My wish list for the future: AI-based applications to improve the lives of people who are blind
My quick interaction with ChatGPT
How AI is Already Helping People With Disabilities

While artificial intelligence can enhance everyone’s lives, provided we properly manage its progression, it has an especially important role to play in the lives of people with disabilities. AI helps people who are hard of hearing communicate, and helps people who are blind or have low vision see or receive visual information in a format they can understand.
For many years, AI has been a big player in the image-description business of automatically generating alternative text for images. Apple, Microsoft, and Google all have algorithms that can provide a high-level description of an image with ever-improving accuracy. Microsoft’s Seeing AI app on iOS was a revolutionary tool when it launched in 2017 and to this day continues to positively impact the lives of many people who are challenged with vision problems. It continues to improve its functionality thanks to Microsoft’s continued support of the free product.
Google’s Live Transcribe tool on the Android platform has been around for several years and can caption a live conversation in real time for people who are deaf. It is just one of many Google AI-based products positively impacting the Deaf and hard-of-hearing community. In many cases, innovative technology is developed for the disability community many years before it reaches the masses and becomes a common topic of discussion in the mainstream media. Audiobooks were developed for the blind in the 1930s, decades before the general public adopted the audiobook as a mainstream tool for enjoying reading without looking at a page.
AI is just the next step in the journey of embracing even more technology into our lives. This technology, while it allows us to think less in some ways, also requires us to be smarter about the world around us if we want to keep up with all the change.
AI is a tool. And like any tool it requires you to learn about it, respect it, and know when and when not to use it. It is no different than social media or the onset of the personal computer or the internet. Each progression through this journey required us to get smarter and use the tools as intended all the while watching out for negative bumps in the technological road of life.
The future of AI-based applications to assist the blind
If you have been paying attention to the conversation in the media recently on the topic of AI you are already aware of how it is achieving things like:
Making people smarter by extending people’s brain power to collect, organize and present research in a convenient form (ChatGPT)
Helping people to do things they couldn’t do previously by putting talents at your fingertips that were never there before
Solving some of the world’s biggest problems like climate change
Making learning easier
Advancing medical breakthroughs
There are so many different challenges AI can help people with visual disabilities to overcome, thereby positively impacting how we live life. Below are just some ideas; I am sure there are many more things we haven’t even imagined yet. Let’s take a look at some that have practical uses for people like me who live with a vision disability but can also apply to the sighted community, too.

Patrick’s AI Wish List:
Autonomous Cars: First on my list is to one day have a self-driving car. I haven’t given up on this dream although I do realize it will take time. AI will play a big part in making autonomous driving a reality.
Document Creation: I do a lot of writing for the sighted world, just like this blog post. Many documents I create are highly formatted for their visual appearance, so I normally turn them over to a co-worker for final inspection (with an eye for cosmetic opportunities to improve). A good-looking, well-formatted document that is visually appealing is something AI should be able to help with. AI can learn my document style preferences for fonts, colors, headings, images and tables, and either point out deviations or missing formatting. It should automatically correct problems like unexpected font changes, or suggest making something a heading to improve navigation. I bet sighted people would love having a visual format checker working alongside their spell checker.
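A minimal sketch of such a format checker, assuming a hypothetical document model of (text, font) runs; the function name and the single rule it applies are purely illustrative of the idea, not any real product:

```python
# Hypothetical visual format checker: given a document as a list of
# (text, font) runs, flag any run whose font deviates from the
# document's preferred style. The data model is an assumption made
# for this sketch.

def check_fonts(runs: list[tuple[str, str]], preferred: str) -> list[str]:
    """Return one warning per run that deviates from the preferred font."""
    warnings = []
    for index, (text, font) in enumerate(runs):
        if font != preferred:
            warnings.append(
                f"Run {index} ({text[:20]!r}) uses {font}, expected {preferred}"
            )
    return warnings

doc = [("Quarterly report", "Calibri"),
       ("Revenue grew 4%.", "Calibri"),
       ("See appendix.", "Comic Sans")]   # the unexpected font change
print(check_fonts(doc, preferred="Calibri"))
```

A real checker would also cover headings, colors and tables, but the same scan-and-flag structure applies.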
Picture This: Many of us aren't artists, and even simple sketches are beyond some people's ability, no matter what their level of vision is. While AI-based systems already exist that can draw for you in a specific art style, what we need are AI-based drawing and sketching systems that take written text or verbal input and create the drawing. This would allow a blind person with an idea to simply talk to their computer in natural language and have a piece of simple art or a sketched idea created for them. It would also allow for better communication when a concept needs to be visualized for a sighted person.
Advanced AI-Based Video Analysis: There is so much AI can do to make advanced live video stream processing a reality, both at home and on the go, now that 5G connectivity is beginning to give us the power to move large quantities of data over the cellular network. Some examples include:
Personal trainer: If you have used a personal trainer, you know that among the many things they do is watch your form, see how you execute an exercise, and give you cues on how to improve the movement so you get the most benefit from the activity while staying safe and avoiding injuries. This normally involves body positioning and how your body moves through space. The squat is a good example: proper form dictates the path your body moves in, and if you don't do it correctly you can hurt yourself over time. Software can learn to do the same thing the trainer does. Some work is already being done in this area by major players in fitness, and I am hopeful it will be inclusive of blind people who want advanced training assistance.
Room with a View: Here is a dream that is less work-oriented but still very important for that connection to the outside world that some sighted people take for granted. We have image detectors that can tell us at a high level what a photo looks like, but I want more! I want AI to watch a live video feed and tell me what is going on outside my window. I want to hear what the sky looks like, whether the leaves are turning colors, whether my flowers are blooming, and whether there are birds flying around, and if so, what kind. If the grass is not as green as it should be, or if it needs to be cut, I should know this. AI can do all of this once we start using it to help people with day-to-day activities.
The Great Outdoors: Speaking of outdoor environments, AI should also help us be more secure in many ways while we are at home. Live video feeds interfacing with AI-based systems can detect potential danger in our environment. Here are some examples of where AI smart cameras can alert us to potential danger:
People approaching that are not recognized. Camera-based doorbells already can start to do some of this but we can do more. Systems can learn who should be near us and who may be unwelcome guests. Multiple cameras providing a perimeter video capture can work together to determine movement and alert as needed
Smoke or fire approaching your home could be an alert even before your smoke detector inside notices it
Animals that may cause harm like bears, mountain lions, snakes, or even unknown dogs could alert the homeowner to a potentially dangerous situation outside their door
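The first alert in that list can be sketched very simply once you assume a recognition model exists. The toy code below is purely illustrative: in a real system the identity would come from a face-recognition pipeline, whereas here a "detection" is just a name string, or None when nothing matched, and every name is invented for the example.

```python
# Toy sketch of the "unrecognized person" alert: compare a detected
# identity against a household whitelist and raise an alert otherwise.
from typing import Optional

KNOWN_PEOPLE = {"Patrick", "Mail Carrier", "Neighbor Sue"}

def alert_for(detection: Optional[str]) -> Optional[str]:
    """Return an alert message for an unknown visitor, else None."""
    if detection is None:
        return "Alert: unidentified person near the door"
    if detection not in KNOWN_PEOPLE:
        return f"Alert: unrecognized visitor: {detection}"
    return None  # known person, stay quiet

print(alert_for("Patrick"))
print(alert_for("Delivery Bot"))
```

The hard part, of course, is the recognition itself and coordinating multiple perimeter cameras; the alerting policy on top of it is the easy layer shown here.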
Anytime a watchful eye (or a second set of eyes) is needed, AI-based solutions could help out both blind and sighted consumers.

Getting Personal: As I mentioned earlier, for many years we have had image detection systems that can analyze a photo of a person and tell you their rough age, facial expression, whether they are wearing glasses, and so on. Current technology can even match images and tell you if the person is in your contact list. What it can't do yet is provide a detailed analysis of the person, including eye color, skin tone, body type, clothing, posture, and jewelry.

Continuing down the advanced image detection path, here is another opportunity to provide an AI-based product that would help the blind but would also really help everyone.
Some Assembly Required? This next image detection scenario would be helpful for both the visually challenged and the sighted. I can’t tell you how many times I have purchased something requiring assembly that I have to figure out without the aid of instructions. Normally, I can achieve the end result without assistance, but it does take a lot longer and more brain power to go from a bag of parts to the end product. Taking a picture of a diagram or scanning a QR Code to get verbal instructions on how to assemble is the goal and anyone would appreciate that level of service.
Free Movement: Now for my most practical request: the ability to navigate a new indoor space independently, without assistance from a human or reliance on GPS or Bluetooth beacon technology. While I can do this right now with the AIRA or Be My Eyes applications, I am reliant on a person being available, and I am responsible for any associated costs. Image detection should be able to identify objects, signs, and dangerous situations. It would be smart enough to pull data from different sources, along with advanced vision analysis, to give me the feedback I need to move around independently, find what I need, and learn about any people or obstacles nearby.
If you look at the above list, every idea presented would be appreciated by a person with vision issues and also by someone with perfect vision. I am a firm believer that good digital accessibility helps everyone; you may just not realize it. Many of the items on my wish list will help everyone if you look at them with an open mind and think about what the possibilities can bring us.

The ChatGPT Challenge
With all the recent media talk about ChatGPT, I will admit I, too, was curious about the hype. Until only a week ago I didn't even have a ChatGPT account, and I have only asked it for two reports, so I am a newcomer to this latest tech. I wondered what ChatGPT could do for me. Could it write an article I could publish, saving me all that time and work? Based on the title of this blog post and the topic that motivated it, I asked ChatGPT to create a report. I am including it below, and I will let you be the judge as to which post you enjoyed reading more: my commentary, or the wonderfully crafted, grammar-perfect, factually correct (?) report I received from the AI engine.
Beginning of ChatGPT Interaction
Question Asked: write me a paper on artificial intelligence and digital accessibility inclusion for disabled
ChatGPT Response:
Introduction:
Artificial Intelligence (AI) is revolutionizing the way we interact with technology. With the power of AI, we can automate repetitive tasks, predict outcomes, and improve decision-making. However, the advancements in AI have not been evenly distributed across all populations. People with disabilities have faced significant barriers to access digital technologies due to lack of accessibility features. The purpose of this paper is to explore the role of AI in digital accessibility and how it can be used to promote inclusion for people with disabilities.
Background:
Accessibility in digital technologies refers to the design and development of software, hardware, and websites to make them usable for people with disabilities. The term “disability” covers a wide range of impairments, including visual, auditory, physical, and cognitive. Accessibility features are essential for people with disabilities to participate in society and access information, education, and job opportunities.
However, creating accessible digital technologies can be challenging, and many developers are not aware of the needs of people with disabilities. As a result, people with disabilities are often excluded from using digital technologies or face significant barriers to access them.
Role of AI in Digital Accessibility:
AI has the potential to revolutionize digital accessibility by automating the creation of accessibility features and making them more personalized. Here are some ways AI can promote accessibility:
Automatic Captioning and Audio Description: AI-powered captioning and audio description can make videos and podcasts accessible to people with hearing impairments or visual impairments. The AI algorithms can analyze the audio and video content and create captions and descriptions automatically, without the need for human intervention.
Voice Recognition: AI-powered voice recognition can make it easier for people with physical disabilities to interact with digital technologies. People with motor impairments can use their voice to control their devices, browse the internet, and send messages.
Natural Language Processing (NLP): NLP is a branch of AI that focuses on the interaction between computers and humans using natural language. NLP can be used to create accessible interfaces for people with cognitive impairments. For example, chatbots and virtual assistants can use NLP to understand and respond to users’ questions in a way that is easy to understand.
Personalization: AI can make digital technologies more personalized for people with disabilities. For example, an AI-powered website can adapt its design and functionality to the user’s needs based on their disability. A website can increase the font size, use high-contrast colors, or provide alternative navigation methods for people with visual impairments.
Challenges:
Despite the potential of AI to promote digital accessibility, there are still significant challenges to overcome. Here are some of the challenges:
Data Bias: AI algorithms are only as good as the data they are trained on. If the data is biased, the AI algorithms will also be biased. For example, if an AI-powered captioning system is trained on a dataset that does not include captions for people with regional accents or dialects, the system will not be able to generate accurate captions for those users.
Lack of Awareness: Many developers are not aware of the needs of people with disabilities and the importance of creating accessible digital technologies. This lack of awareness can lead to the exclusion of people with disabilities from using digital technologies.
Cost: Creating accessible digital technologies can be costly, and many organizations may not have the resources to invest in accessibility features.
Conclusion:
AI has the potential to revolutionize digital accessibility and promote inclusion for people with disabilities. However, to achieve this, we need to address the challenges and ensure that AI is used in an ethical and inclusive manner. By creating accessible digital technologies, we can ensure that people with disabilities have equal access to information, education, and job opportunities.
End of ChatGPT Interaction
Some questions to think about:
Does the above ChatGPT report provide value? Yes
Is the report easy to understand and well organized? Yes
Does the report provide relevant information you can use? Yes
Could you put this report together after spending an afternoon Googling and reading articles? Yes
Does the report provide you with the perspective of another human who has unique experiences, ideas and aspirations? Probably not. You be the judge and tell me. I would enjoy your comments and would love to know which blog posting you found to be more enjoyable, mine or ChatGPT’s.
Final Thoughts

The future of AI is a bright one if you ask me. Digital accessibility is the gap between mainstream technology and assistive technology. AI is helping, and will continue to help, close that gap until there is no friction left between technologies and everything is accessible to people with disabilities. This presumes we are not left out of the conversation when it comes to the future development of AI-based systems. AI that presents interfaces that aren't inclusive of all people's abilities will cause problems.
AI-based tools that generate code for developers must be taught to create WCAG-compliant code, and that code must be manually tested to ensure the user experience is good, so that no one is left behind. There is a lot of work to do in this space, even when AI tools are automatically developing systems for use by humans. The real question is whether all of us will be at the AI table providing input so this next frontier of technology development benefits everyone. Let's all keep an eye on the latest AI developments, continue learning, and make sure our voices are heard at the AI table as we continue growing in this wonderful space.
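To make the WCAG point concrete, here is one of the simplest checks generated code should pass: every image needs a text alternative (WCAG success criterion 1.1.1). Real audits use dedicated tools such as axe-core; this stdlib-only sketch just shows the shape of the idea.

```python
# Minimal accessibility lint: flag <img> elements missing an alt attribute.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as (name, value) pairs; missing alt is a violation
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append("img missing alt attribute")

checker = AltTextChecker()
checker.feed('<p><img src="logo.png" alt="Company logo"><img src="deco.png"></p>')
print(checker.violations)  # the second image has no alt text
```

Checks like this are exactly the kind of rule an AI code generator could be taught to satisfy up front, with human testing layered on top for the judgment calls automation can't make.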
https://www.deque.com/blog/ai-artificial-intelligence-accessible-intelligence/
5 Signs You Need to Be Tested for Hearing Loss – CNET
If you find yourself saying “what?” a lot, you might be starting to wonder about your hearing. You’re not alone. Roughly 15% of American adults have some level of hearing loss. The good news? Now that you can access over-the-counter hearing aids, improving your hearing can be pretty simple.
So should you get a hearing test? To answer that, it’ll help to understand the signs of hearing loss, the importance of regular hearing checkups and what to expect from them. Let’s look at all of that here.
What is a hearing test?
Before we get into the signs you might have hearing loss, let’s ease any fears you might have about a hearing screening. If you’re feeling uneasy because you don’t know what to expect, we can help.
Hearing loss tests are quick, painless processes. Usually, you’ll be done in a half hour or less.
Your certified audiologist generally tailors the hearing screening to you. Some of the more common types of hearing tests include:
Pure-tone tests that evaluate the quietest volume at which you can hear certain pitches.
Speech tests during which you listen to and try your best to repeat words and phrases.
Tests to see if there’s fluid or wax buildup in your ear, like bone conduction testing and tympanometry.
More specialized testing like auditory brainstem response (ABR) or otoacoustic emissions (OAE) tests.
Pure-tone testing and speech testing are the most common hearing tests. With a pure-tone hearing test, you sit in a soundproof room. You’ll wear a set of headphones through which your audiologist sends sounds at different pitches and volumes to one ear at a time. You signal when you hear the sound, usually by raising a hand or pressing a button.
Speech testing functions similarly, but instead of hearing tones, you hear words and are asked to repeat them. Your audiologist evaluates the lowest volume at which you can still clearly make out the words and phrases.
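The stimuli in a pure-tone test are, at heart, just sine waves at controlled pitches and volumes. The sketch below shows only that underlying math; real audiometers are calibrated hardware, and the sample rate and parameters here are illustrative.

```python
# Simplified sketch of generating an audiometry-style pure tone:
# a sine wave at a given frequency (pitch) and amplitude (volume).
import math

SAMPLE_RATE = 44_100  # samples per second

def pure_tone(freq_hz: float, level: float, duration_s: float) -> list:
    """Return sine-wave samples in [-level, level] for the given pitch."""
    n = int(SAMPLE_RATE * duration_s)
    return [level * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)]

# A 1000 Hz tone at half amplitude, one tenth of a second long.
samples = pure_tone(1000.0, 0.5, 0.1)
print(len(samples))  # 4410 samples
```

During a test the audiologist sweeps the frequency and steps the level down until you can no longer signal that you hear the tone; that quietest audible level is what gets plotted on your audiogram.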
The American Speech-Language-Hearing Association recommends that all adults get hearing loss tests every decade until age 50, after which point they should get regular hearing checkups every three years. Remember, getting hearing aids is easier now thanks to over-the-counter options, so there’s no reason to drag your heels here.
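The ASHA schedule above reduces to a one-line rule, sketched here for anyone who wants to pencil in their next checkup (ages and thresholds straight from the recommendation, nothing added).

```python
# The ASHA adult schedule: hearing tests every 10 years until age 50,
# then every 3 years from the 50th birthday onward.
def checkup_interval_years(age: int) -> int:
    """Recommended years until the next routine hearing test."""
    return 10 if age < 50 else 3

print(checkup_interval_years(35))  # 10
print(checkup_interval_years(55))  # 3
```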
Importance of regular hearing checkups
Hearing loss generally happens gradually. In fact, it can happen so slowly that you don’t realize that your hearing has deteriorated over time. The ASHA reports that adults wait an average of nearly nine years before addressing their hearing problems, largely because of this issue.
In other words, even if you think your hearing is fine, you might benefit from periodic hearing loss tests. Under age 50, you can go 10 years between hearing checkups unless you notice any of the signs we’re about to outline. Because hearing often deteriorates with age, though, you should bump up to regular hearing checkups on a triennial basis once you hit your 50th birthday.
If you skip hearing screenings, you might inadvertently change your life to compensate for hearing loss. You might avoid crowded settings because you have trouble hearing over background noise, for example, or increase the volume in your headphones, which can further damage your hearing. Ultimately, hearing tests can not only check how your ears function but also protect them, along with your quality of life.
5 signs you need a hearing test
If you’re due for a hearing test per the ASHA recommendation, schedule yours. Even if you haven’t crossed the recommended timeline, you might need to get your ears checked if these signs sound familiar:
Saying “what?” a lot — If you have to ask people to repeat themselves on a regular basis, it likely means your ears aren’t working as well as they should be.
Hearing ringing in your ears — Hearing loss often goes hand in hand with tinnitus, which causes that persistent ringing. So even if you think you’re hearing fine, if you’ve got a regular ringing in your ears, it’s worth talking to an audiologist.
Pumping up the volume — Do you often listen to your headphones at or near full volume? Do you turn up the volume on the TV and in the car? If so, you might be compensating for hearing loss.
Struggling to hear amid background noise — Does your ability to hear clearly deteriorate as background noise increases? If you struggle at parties and other loud environments, you’re likely living with some level of hearing loss.
Embarrassment in social situations because you can’t hear — Sometimes, difficulty hearing can lead to social gaffes. If you’ve ever wrongly assumed someone said something because you didn’t want to ask them to repeat themselves, for example, hearing loss could be to blame.
The National Institute on Deafness and Other Communication Disorders has an online questionnaire you can take to help you find out if you would benefit from a hearing screening.
If you think you do need a hearing test, the ASHA recommends that Americans get hearing loss tests performed by a certified audiologist. Your doctor should be able to refer you to one.
Bottom line
Many adults live with some level of hearing loss, often without knowing it. That’s why regular hearing checkups matter. If any of the signs we just laid out sound familiar, schedule a hearing test for yourself.
Plus, if you catch your hearing loss early, when it's mild to moderate, over-the-counter hearing aids can likely help. Before you jump in with them, though, see a certified audiologist and weigh some key considerations.

https://www.cnet.com/health/medical/5-signs-you-need-to-be-tested-for-hearing-loss/#ftag=CAD590a51e
Autosum in Excel Spreadsheets – Sharon’s Shortcuts
— Read on sharons-shortcuts.ie/autosum-in-excel-spreadsheets/
https://www.makeuseof.com/online-directories-of-ai-tools-search-app/
6 Online Directories of AI Tools to Discover or Search for the Best AI App
These free directories list all the AI tools available online, so you can browse or search for them quickly and easily.

It seems like every other day we see new apps that use artificial intelligence or machine-learning technology to make complex tasks easier. OpenAI's ChatGPT and its many uses are obviously impressive, but there are several other AI models that take tools beyond just chat-based applications.
The rapid rate at which these AI tools are being released makes it hard to keep track of them all and to find the right app quickly when you need it. A few fans of the technology are making this easier with AI app directories. Some build them themselves, others crowd-source them, and some even shun websites in favor of simple Notion databases. No matter what, it's easier than ever to find an AI tool with these free directories.
1. Futurepedia (Web): Large and Daily Updated Directory of AI Tools
Futurepedia is one of the largest AI tool directories on the internet, with almost 3,000 entries, and it keeps growing as new apps are added daily. In fact, right at the top you can check the tools added today (with a badge indicating how many there are), as well as the latest news, for anyone interested in using or developing AI tools.
The app uses a search engine that packs in some AI magic for you to use natural-language terms, like “I want help with my Math homework.” You can also browse or filter the website with tags such as type of pricing, mobile app, open source, browser extension, no-signup required, etc.
Each tool Futurepedia shows is in the form of a card with basic information like a short description, how many users have liked or bookmarked it, and whether it’s free, freemium, or paid. You can click the card for a longer description of the app, reviews by Futurepedia users, as well as recommendations for alternative AI tools.
2. TopAI.tools (Web): Discover AI Tools, and Create and Export Shortlists
TopAI.tools has a collection of over 3800 AI tools in its directory across several categories such as AI Detection, Art, Audio, Avatars, Business, Chat, Coaching, Data Analysis, Design, Development, Education, Email, Finance, Gaming, Images, Legal, Marketing, Music, Podcasting, Productivity, Prompt Guides, Research, SEO, Social Media, Speech, Translation, Video, and Writing. Each tool card shows you the price, a short description and image, and other non-category tags such as no-code, students, etc.
All cards also have the option to like or bookmark them for later, as well as a checkbox for Shortlist. You can view your shortlist at any time, but if you’re registered, you can save these AI tool lists for later and even export or share them with friends. It’s a cool way to build a set of apps that you want to recommend to others, as a manager to a team or a teacher to students.
3. ToolScout (Web): Discover Random AI Tools and Built-In Chatbot
ToolScout has a large directory of AI tools across categories like Image, 3D, SEO, Customer Support, Email, Personalized Video, Research, Sales, Design, Search Engine, Assistant, Avatar, Writing, Marketing, Fun, Logo, Audio, Video, Speech, Gaming, Code, Text, and Music. You can sort the list by popular, newest, or verified tools, and filter it by price type.
Click the "Discover" button at any time to be taken to a random AI tool from the directory; it works like a StumbleUpon for AI apps you wouldn't come across otherwise. ToolScout also comes with a built-in chatbot you can talk to like ChatGPT. Instead of traditional search, this is a new way to find information from within the site.
4. iLib (Web): Most Popular AI Tools and Sites Based on Monthly Traffic
While popularity isn’t the yardstick of being the best quality technology, it is still a good indicator of which apps or tools are worth your time. The folks at iLib have gathered over 3,000 of the internet’s favorite AI tools and ranked them by their monthly online visitors. You can view this list with columns like traffic rank, monthly traffic, category, and description, with a maximum of 200 entries per page.
The main directory at iLib has a large collection of over 4,500 AI tools, spread across 100+ categories for easy browsing. You can filter the tools by payment type (freemium, free, free trial, paid, payment required, subscription, free options) or by product type (tool, generative AI, tutorial, API, SaaS, app, extension, prompts). Unlike other such sites, iLib offers all the information about a tool in its card; if you click it, you'll be taken straight to the tool, not to a page with additional information.
5. AIcyclopedia (Web): Directory of AI Tools, AI Podcasts, and AI Prompts
Another AI tool directory site, AIcyclopedia doesn’t offer a whole lot of information about the AI tool in the card, such as pricing information or categories and tags. In fact, you can’t even filter the directory with such different parameters, which is common on other sites. You can browse by paid tools or free tools, but there aren’t further refinements. So why should you check out AIcyclopedia?
First, it doesn't restrict itself to AI tools alone. The directory also includes several podcasts related to AI, covering development, discussions of tools and their uses, and news and interviews with experts. AIcyclopedia also has a collection of effective ChatGPT prompts and other templates to use with AI tools to get useful answers.
Second, AIcyclopedia has two types of search engines baked into the site, which you can toggle to switch between. You can search the tools like a traditional search engine with keywords; or you can use natural language to get search results that match what you are looking for.
6. AI Infinity (Web): Free Notion Database of AI Tools
AI Infinity looks and feels like a small-time project to gather the best AI tools out there and make them easy to find. And that’s why it seems like they have chosen a simple Notion database to list the directory. But surprisingly, it actually makes it a great way to browse these tools.
See, the default view in all other AI tool directories is the card format. No one apart from AI Infinity offers a table with columns for all the tools so that you can quickly compare them. Information is kept to a minimum, showing just the tool name, categories, price, URL, and description. You can filter or sort the list by category, price, or date. And as with any Notion database, you can duplicate it for yourself or export the data to use as you want.
For Best Results, Mix AI and Manual Tools
As you browse or search these directories, you’ll be surprised by how many amazing AI tools there are now. They undoubtedly reduce your workload, but you’ll also find that the technology isn’t infallible. In most cases, it’s best to pair the AI tool with a different manual tool. Take the results from the AI tool, and then apply the final finishing touches yourself in any app of your choice.
About The Author
Mihir Patkar has been writing on technology and productivity for over 14 years at some of the top media publications across the world. He has an academic background in journalism.
New to Being Blind: How Lenovo’s Accessibility Features Help Overcome Vision Loss
Lenovo’s Product Diversity Office was formed in 2020 to ensure that Lenovo’s emerging technologies work for people of all backgrounds and abilities. While the team works to eliminate bias, they also look for improvements that provide a competitive user experience for people with disabilities.
— Read on finance.yahoo.com/news/being-blind-lenovo-accessibility-features-161500505.html
Trivia11y: A Web Accessibility Quiz by Sparkbox
Test Your Accessibility Knowledge
Accessibility is important, but becoming an expert means taking the time to understand and memorize priorities, regulations, requirements, and techniques. So how can you determine whether you’re up to speed? With games, of course!
— Read on trivia11y.com/
With a US market share of 63.4% (as of 2022), Amazon's Audible is quite popular among audiobook listeners. It gives users access to a range of podcasts, audiobooks, and classics. As for subscription plans, you previously paid a $14.95 monthly fee and received one credit; for two credits, you paid a bit more. Now, however, there are two different subscription plans available: Audible Plus and Premium Plus. What's the difference between the two? Let's find out:
Different Prices
Audible Plus is the cheapest plan for Amazon's Audible, starting at $7.95 per month. At this rate, users can access about 11,000 of the roughly 500,000 titles available in the store. Meanwhile, Premium Plus starts at $14.95 per month ($22.95 per month for two credits). Although the price is literally double that of Audible Plus, you get access to double the content and audiobooks, and the catalog includes discounted rates on the remaining 450,000 audiobooks.
You will receive one credit (or two, depending on the plan) each month, which you can use to buy any book in the Audible store. You can save or download books to your device for offline listening and keep them even if the subscription plan is canceled.
Multiple Benefits
An Audible Premium Plus subscription allows you to exchange purchased titles and get exclusive discounts (about 30%) on books. These discounts apply to additional titles, not to the free catalog. You can also listen to these audiobooks across multiple devices as required. Meanwhile, with Audible Plus, your access is limited to the titles in the "Included" section of the plan, and you may not get the benefit of high-end offers and discounts.
Members-only Content
If you purchase the Premium Plus subscription, you will be able to access the members-only content too. However, for the Audible Plus subscription, you won’t be able to access the members-only content.
Meanwhile, Audible Plus has some exclusive content to check out. You can find popular titles like The Queen’s Gambit and Anne of Green Gables. It also has diverse podcasts for multiple genres, including spirituality, crime, and true stories. Learn about the difference between podcasts vs. audiobooks here.
While the list includes audiobooks from famous creators, it also has Audible's original creations. Note that the Audible Plus catalog may not be updated as often as the Premium Plus catalog.
Audible Plus Vs. Premium Plus: Which One’s Better?
Choosing the right subscription plan depends on your requirements. If you want to listen to a plethora of audiobooks with multiple credits, try the Premium Plus version of Audible. Meanwhile, if you already have a full-fledged library and just want to fill in the gaps, go for the Audible Plus subscription plan; it's a good deal at an affordable price. Overall, both Amazon Audible subscription plans meet the demands of audiobook usage, which increased by 70% in 2022.
Navkiran Dhaliwal is a seasoned content writer with 10+ years of experience. When she’s not writing, she can be found cooking up a storm or spending time with her dog, Rain.