Featured

A Guide for Windows Screen Reader Users Transitioning to Mac

By Dr. Elijah Irwin

For many blind users, moving from a Windows system to a Mac can be both exciting and a bit daunting. One of the most common questions is about screen reader compatibility and how familiar Windows-based workflows translate to macOS.

On Windows, screen readers like JAWS and NVDA allow for a more linear navigation experience. Users can move through elements in a direct, step-by-step manner, using the Tab key and standard arrow navigation. This flat structure is familiar and feels fast for many users.

On the Mac, Apple’s VoiceOver introduces a different approach: hierarchical navigation. Many elements are grouped into containers like toolbars, tables, or sidebars. To access what’s inside them, users must interact using:

VO + Shift + Down Arrow (to interact)

VO + Shift + Up Arrow (to stop interacting)

This interaction model may feel like an extra step at first, especially for those used to just arrowing through everything. But once understood, it offers more control and structure—especially in complex apps.

Some new users try to avoid interaction altogether by relying on the Tab key or customizing settings. But even with tweaks, interacting is often necessary, and embracing it leads to smoother navigation overall.

VoiceOver’s design is built to offer focus and clarity when working within grouped content. Rather than flattening everything like on Windows, VoiceOver encourages working inside structured containers. This has benefits, especially once you’re used to how it works.

For those considering the switch, here are a few simple tips:

Practice Interaction

Use VO + Shift + Down Arrow to interact and VO + Shift + Up Arrow to stop. It becomes natural with time.

Learn the Rotor

VO + U opens the Rotor. It lets you jump to headings, links, form controls, and more—great for navigating quickly.

Use VoiceOver Help

Press VO + H to explore help options and practice commands. You can even use VO + K to practice keystrokes in a safe space.

Expect Differences in Microsoft Office

While Word, Excel, and PowerPoint are available on Mac, the navigation is not identical to Windows. The Mac version requires getting used to new layouts and VoiceOver interaction, especially in ribbon menus and dialogs.

Consider Native Mac Apps

If your needs are basic, Apple’s own Pages, Numbers, and Keynote may be easier to use with VoiceOver. These apps follow VoiceOver’s structure more naturally and integrate better with macOS.

Be Patient

There’s a learning curve, but with practice, you’ll become fluent. Many users find VoiceOver powerful once they adapt to its logic.

In short, the Mac is different—but not impossible. Once you get used to interacting, using the Rotor, and learning the new layout styles, it can be just as productive as Windows, with the added benefit of Apple’s tight integration and consistent design.

Written by Dr. Elijah Irwin, a seasoned Apple Mac user and Windows screen reader adventurer.

Why Blind Shoppers Can’t Complete Online Checkout

Online shopping promises convenience. Browse. Add to cart. Pay. Done.

But for many blind shoppers who depend on screen readers, the checkout stage is where the process breaks down completely. What should be a simple transaction turns into a frustrating dead end.

True accessibility does not stop at product pages. A fully accessible checkout must allow a shopper to review items in the cart, confirm selected sizes or colors, adjust quantities, remove products, apply discount codes, select a payment method, and complete payment independently.

If even one of these steps does not function properly with assistive technology, the entire checkout experience fails.

One major barrier is unlabeled controls. Buttons for actions like removing an item or updating quantity may appear visually clear, but without proper coding labels, a screen reader may announce only “button” or provide no meaningful description. Without context, blind shoppers cannot confidently manage their cart.

Another common issue is missing feedback on selected product variations. If a customer chooses a specific size or color, that selection must be clearly communicated. When this information is only shown visually and not announced to screen reader users, they cannot confirm whether their order is accurate.

Dynamic updates within the cart also cause problems. Items may become unavailable. Shipping options may change. Errors may appear. If these updates are not properly announced through accessible alerts, screen reader users remain unaware that something has changed or that an action is required.

Promo code functionality frequently creates additional obstacles. Input fields may lack proper labels. The “Apply” button may not respond correctly. Error or confirmation messages may appear visually but are not read aloud. This leaves blind shoppers uncertain whether a discount has been accepted or rejected.

Payment selection can be equally problematic. Many checkout pages display payment options as images, such as credit card logos or digital wallet icons. Without descriptive text alternatives, screen readers may simply announce “graphic,” making it unclear which payment method is being selected.
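Many of these failures come down to a handful of markup patterns. The sketch below is illustrative only, not code from any particular storefront: it shows one common way to give an icon-only remove control an accessible name with aria-label, and to announce cart or promo code changes through an ARIA live region. The function names and the visually-hidden CSS class are assumptions for the example.

```typescript
// Illustrative sketch only. Assumed: a cart page and a "visually-hidden" CSS class
// that hides the element visually while keeping it available to screen readers.

// Create a polite live region once, when the cart page loads.
function createCartStatusRegion(): HTMLElement {
  const region = document.createElement("div");
  region.setAttribute("role", "status"); // announced politely, without stealing focus
  region.className = "visually-hidden";
  document.body.appendChild(region);
  return region;
}

// Updating the live region's text causes screen readers to announce the change.
function announce(region: HTMLElement, message: string): void {
  region.textContent = message;
}

// Give an icon-only remove button a name that says what it removes.
function labelRemoveButton(button: HTMLButtonElement, itemName: string): void {
  button.setAttribute("aria-label", `Remove ${itemName} from cart`);
}

// Example: confirm a promo code result so it is heard, not just seen.
const cartStatus = createCartStatusRegion();
announce(cartStatus, "Promo code accepted. Order total updated.");
```

The same live region can carry stock changes, shipping updates, and error messages as they appear, so a screen reader user hears what a sighted user sees.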

When these barriers appear, the impact is significant. Blind shoppers may abandon their carts, make unintended purchases, or lose trust in the retailer. For businesses, this translates directly into lost sales and reputational harm.

Automated accessibility testing tools often fail to detect these real-world usability breakdowns because they focus on technical compliance rather than the lived experience of navigating a full checkout workflow with a screen reader.

An accessible checkout must function smoothly at every step. Controls must be labeled clearly. Form fields must be usable. Status updates must be announced. Payment options must be understandable.

If any of these elements are missing, the sale cannot be completed.

Source: https://blog.usablenet.com/shopping-cart-accessibility-screen-reader-barriers

How to Turn Off Gmail Smart Features with Screen Readers

 

Source: How to opt out and turn off all ‘smart’ AI features in Gmail

Introduction

 

Gmail enables several smart features by default. These include Smart Compose, Smart Reply, and other Google Workspace features that analyze your activity to offer suggestions.

For many screen reader users, these features can feel distracting, unpredictable, or unnecessary. Some users also prefer to limit background data processing.

This guide walks you through turning these features off, step by step, using common screen readers.

What This Guide Helps You Achieve

It turns off:

Smart Compose

Smart Reply

Smart features in Gmail Chat and Meet

Google Workspace smart features

Smart features in other Google products

Section 1

Gmail App on iPhone

VoiceOver (iOS)

1. Open the Gmail app.
2. Swipe right until you hear the Menu button. Double tap to open it.
3. Swipe right to Settings and double tap.
4. Select your email account.

Turning off Smart Reply and Smart Compose

5. Swipe right to Smart Reply. Double tap to turn it off.
6. Swipe right to Smart Compose. Double tap to turn it off.

Turning off Smart features

7. Swipe right to Data privacy and double tap.
8. Swipe right to Smart features. Double tap to turn Smart features off.
9. Locate Google Workspace smart features and double tap.
10. Turn off Smart features in Google Workspace.
11. Turn off Smart features in other Google products.
12. Activate Done to finish.

Section 2

Gmail App on Android

TalkBack (Android)

1. Open the Gmail app.
2. Swipe right to the Menu button and double tap.
3. Swipe to Settings and double tap.
4. Select your email account.

Turning off Smart Reply and Smart Compose

5. Swipe to Smart Reply. Double tap to turn it off.
6. Swipe to Smart Compose. Double tap to turn it off.

Turning off Smart features

7. Swipe to Data privacy and double tap.
8. Swipe to Smart features. Double tap to disable it.
9. Open Google Workspace smart features.
10. Disable Smart features in Google Workspace.
11. Disable Smart features in other Google products.
12. Use the Back gesture until you return to the inbox.

Section 3

Gmail on Mac

VoiceOver (macOS)

1. Open Gmail in your web browser.
2. Navigate to the Gmail page content.
3. Press VO + Right Arrow until you find the Settings gear button.
4. Press VO + Space to activate it.
5. Press VO + Right Arrow to See all settings and activate it.

General tab

6. Press VO + Right Arrow to Smart Compose. Choose Off.
7. Press VO + Right Arrow to Smart Reply. Choose Off.

Smart features

8. Press VO + Right Arrow to “Turn on smart features in Gmail Chat and Meet.”
9. If it is checked, press VO + Space to uncheck it.

Workspace smart features

10. Press VO + Right Arrow to Manage Workspace smart feature settings and activate it.
11. Turn off Smart features in Google Workspace.
12. Turn off Smart features in other Google products.

Save changes

13. Press VO + Right Arrow to Save changes.
14. Press VO + Space to save.

Section 4

Gmail on Windows

NVDA (Windows)

1. Open your browser and go to mail.google.com.
2. Press Tab until you reach the Settings gear.
3. Press Enter.
4. Tab to See all settings and press Enter.

General tab

5. Tab to Smart Compose. Use Space or the arrow keys to set it to Off.
6. Tab to Smart Reply. Set it to Off.

Smart features

7. Tab to “Turn on smart features in Gmail Chat and Meet.”
8. Press Space to uncheck it.

Workspace smart features

9. Tab to Manage Workspace smart feature settings.
10. Press Enter.
11. Tab to Smart features in Google Workspace. Press Space to turn it off.
12. Tab to Smart features in other Google products. Press Space to turn it off.

Save changes

13. Tab to Save changes.
14. Press Enter.

Section 5

Checking That Everything Is Off

On iPhone and Android

Reopen Gmail Settings.

Confirm that Smart Reply, Smart Compose, and all smart feature options remain turned off.

On Mac and Windows

Refresh Gmail.

Reopen Settings and confirm that Smart Compose, Smart Reply, and all smart features are still disabled.

Closing Notes

Once these options are turned off, Gmail behaves more like a traditional email app.

The interface becomes calmer with fewer interruptions and more predictable behavior.

This setup works well for many screen reader users who value consistency, clarity, and control.

How to design great alt text: An introduction

Writing Effective Alt Text: More Than a Checkbox

Images need alternative text, commonly known as alt text. This is often one of the first accessibility concepts designers and developers learn. On the surface it seems simple, and it usually is easy to implement. Automated tools can quickly detect whether alt text exists.

The real challenge is not adding alt text, but deciding when it is needed and how to write it well.

Good alt text requires understanding two things: who the text is for and why the image exists.

Who Uses Alt Text

Alt text is primarily used by people who rely on screen readers to access websites, apps, and software. This includes people who are blind and people with low vision who may not be able to see images clearly enough to understand them.

If an image communicates information that is not available elsewhere on the page, a user who cannot see that image will miss that information entirely if alt text is missing.

Alt text is also useful for people with slow or unreliable internet connections. Images can be turned off in most browsers to speed up page loading, and when this happens, the alt text is shown in place of the image.

Some people with cognitive disabilities choose to turn off images to reduce distractions and make content easier to process.

On the other hand, people with learning disabilities may benefit from images that support written text. This highlights an important truth about accessibility: different users have different needs, and thoughtful image use matters.

Alt text also benefits search engines. It helps them understand image content and contributes to search engine optimization.

Do All Images Need Alt Text?

Yes. Every image should include an alt attribute.

This does not mean every image needs a description. Decorative images should use an empty alt attribute. This tells screen readers to ignore the image. Without it, a screen reader may announce the image file name, which creates a poor experience.

In accessibility, images generally fall into two categories: decorative and informative.

Decorative vs Informative Images

Decorative images exist purely for visual appeal or repeat information already available in text. Removing them does not reduce understanding. These images should use empty alt text.

Informative images convey meaning or information that is not otherwise available. If removing the image would cause a loss of information, the image is informative and needs descriptive alt text.

A simple test helps. Imagine the image is removed. If something important is missing, the image is informative.
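The same test can be expressed at the markup level. The short sketch below is only an illustration, using standard browser DOM calls, of the three situations a screen reader actually meets: no alt attribute at all (an error), an empty alt (decorative, skipped), and non-empty alt (informative, read aloud).

```typescript
// Illustrative sketch: how an image's alt attribute maps to what a screen reader does.
type AltStatus = "missing" | "decorative" | "informative";

function classifyImage(img: HTMLImageElement): AltStatus {
  if (!img.hasAttribute("alt")) return "missing";          // file name may be announced
  const alt = img.getAttribute("alt") ?? "";
  return alt.trim() === "" ? "decorative" : "informative"; // skipped vs. read as written
}

// Only images with no alt attribute at all are flagged; empty alt is a valid choice.
const missingAlt = Array.from(document.querySelectorAll<HTMLImageElement>("img"))
  .filter((img) => classifyImage(img) === "missing");

console.log(`${missingAlt.length} image(s) have no alt attribute`);
```

An automated check can only go this far; whether a non-empty description is actually useful still depends on context.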

Context is critical. A large background banner may add visual interest but no meaning. A logo identifies the site. The same type of image can be decorative in one context and informative in another.

Images Used With Text

Images often appear alongside headlines, captions, or product descriptions. In many cases, the surrounding text already explains what the image shows.

When this happens, adding alt text that repeats the same information can be redundant. Sometimes the image exists mainly to attract attention or encourage a click rather than to convey details.

Whether such images need alt text depends on purpose and context. There is no single rule that applies in every case.

Images That Almost Always Need Alt Text

Some image types are almost always informative:

Images that act as links or buttons

Images that contain important text

Logos

Even for these, it is still important to check context. If nearby HTML text provides the same information, duplicating it in alt text may not be necessary.

Images Used as Links or Buttons

Clickable images are common, especially on marketing and e-commerce sites. Screen readers always announce links and buttons. If a linked image has no alt text, a user may only hear the word “link,” with no idea what it does.

When writing alt text for clickable images:

The image should have alt text or be part of a clickable area that includes descriptive HTML text

The text should make clear what will happen when the link or button is activated

If there is no visible text, alt text is essential
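One quick way to spot violations of these rules is sketched below with standard DOM calls. It is not a complete audit (it ignores other naming techniques such as aria-label); it simply looks for links whose only content is an image without usable alt text, since those are the ones announced as just “link.”

```typescript
// Illustrative sketch: find links with no visible text and no image alt text.
const unnamedImageLinks = Array.from(document.querySelectorAll("a")).filter((link) => {
  const hasVisibleText = (link.textContent ?? "").trim().length > 0;
  const img = link.querySelector("img");
  const hasImageAlt = img !== null && (img.getAttribute("alt") ?? "").trim().length > 0;
  return !hasVisibleText && !hasImageAlt; // nothing for a screen reader to announce
});

console.log(`${unnamedImageLinks.length} image link(s) lack an accessible name`);
```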

If users cannot understand a link or button, they are unlikely to use it.

Images of Text

Whenever possible, text should be rendered using HTML rather than embedded inside images. HTML text is easier to access and customize.

In marketing, designers sometimes rely on visual text styles that are difficult to reproduce with CSS. In these cases, teams may use a single image that includes both text and visuals and provide the text using alt text.

While this helps screen reader users, it still fails accessibility requirements for many low vision users who need control over font size, color, and contrast. It is sometimes used as a practical shortcut, but it is not ideal.

Logos

Logos almost always need alt text. Even when a logo is mostly text, recreating it accurately in HTML often compromises design or branding.

At a minimum, the alt text should be the company or product name. If the logo links to the homepage, a clearer description such as the company name followed by “home” provides a better experience.

Icons Are Images Too

Icons are often overlooked, but they are images and follow the same rules.

Ask yourself:

Is there text next to the icon?

Does that text clearly explain the icon’s purpose?

Is the icon clickable?

Icons without labels are common, but they create accessibility issues. Many icons can represent multiple actions. A gear icon, for example, might mean settings, preferences, or tools.

Including text labels next to icons improves clarity, increases the clickable area, and benefits everyone. If labels are not possible, the icon should be clear, large enough to interact with easily, and include meaningful alt text.

When Alt Text Should Be Written and By Whom

Alt text should be written as early as possible.

If you created the image, you should write the alt text.

If you introduced the image in a wireframe or mockup, you should write the alt text.

If you are a developer implementing an existing design, the alt text should already exist. Your responsibility is to ensure it is implemented correctly.

Designers can include alt text as annotations in wireframes. Content creators should write alt text or captions while drafting content.

Alt text written long after the design phase is usually less effective because the original context has been lost. As with most accessibility work, it is faster and better when done upfront.

Finding the Right Level of Detail

Good alt text focuses on what matters.

Too little detail is unhelpful.

Too much detail is overwhelming.

For example, “Photo of a house” provides very little value. A more useful description might mention the type of house, the setting, and one or two key features. At the same time, listing every architectural detail is unnecessary unless those details are important to the content.

The goal is clarity, not exhaustiveness.

Why Alt Text Matters

Writing alt text forces you to think about the purpose and meaning of every image you include. This process often leads to better design and better content.

While writing alt text, you may realize an icon is unclear and needs to be redesigned. While drafting a blog post, you may struggle to describe an image and decide it does not add value after all.

By thinking carefully about images and their impact on different users, you create stronger, more inclusive experiences. Images are powerful. The more intention you bring to them, the more effective they become.

The Evolution of the Mac Finder Smiley Face Icon

As a blind Mac user who relies on VoiceOver every day, I may not see the Finder’s smiling face, but I know it as well as anyone who does. For those of us using a screen reader, the Finder is more than just an app or an icon. It is the gateway to everything on the Mac, the place where our files live, where navigation begins, and where Apple’s design philosophy truly meets accessibility.

Even though I do not see the smile, I can feel what it stands for: friendliness, simplicity, and the welcoming tone that has defined the Mac since 1984.

The Origins – The Happy Mac

When Apple introduced the first Macintosh in 1984, it started up with a smiling computer known as the Happy Mac. That little symbol appeared at boot up to let users know the system had successfully loaded. It was more than a technical indicator. It was Apple’s way of saying, “Welcome.”

As the Mac evolved, that welcoming spirit carried forward into the Finder, the app that manages all your files and folders. Its smiling blue and white face became the lasting emblem of the Mac desktop, a visual expression of the friendliness many of us sense through VoiceOver every time we press Command and Tab and hear “Finder.”

Some design historians have linked the split face design, half light blue and half dark blue, to Pablo Picasso’s minimalist line art, especially his piece Deux personnages (Two Characters, 1934). Whether or not that is true, it makes sense. The design mirrors Apple’s belief that technology can be both artistic and human.

Redesign and Refinement Through the Mac OS Eras

When Mac OS X arrived in 2001, Apple redesigned the Finder icon for the Aqua interface. It became softer, shinier, and full of gradients, yet the familiar smile stayed right where it belonged.

Over the years, as macOS moved from versions like Jaguar and Panther to Catalina and Big Sur, Apple refined the Finder’s look again and again. The textures changed, the lighting shifted, and the lines became cleaner. Through every change, the icon remained instantly recognizable.

By 2020, when macOS Big Sur arrived, Apple simplified the Finder icon even further. It became flatter and brighter to match the company’s move toward a more unified, minimal design. For sighted users, the smile looked fresher. For VoiceOver users like me, it still sounded the same: “Finder.” A name that means home base, consistency, and reliability.

Modern Era and macOS Tahoe

In mid-2025, Apple briefly made a design tweak in macOS Tahoe that surprised longtime users. The company swapped the two shades of blue on the Finder face, putting the darker color on the right instead of the left.

It was a small change, but it drew attention, showing how even subtle adjustments can stir emotion among Mac users. The reaction was so strong that Apple quickly switched it back in the next beta.

That story reminds me that design, whether visual or functional, has a powerful emotional impact. For blind users, that same care is reflected in how VoiceOver reads the Finder’s layout and communicates structure clearly. Accessibility, like the Finder’s smile, is about making the experience friendly and human.

Why the Finder Face Still Matters

The Finder smile is more than an image on a Dock. It represents the core of Apple’s design philosophy, that technology should feel approachable, intuitive, and kind.

For me, as a blind Mac user, that idea extends beyond visuals. It is in the smoothness of navigation, the logical layout of folders, and the fact that VoiceOver lets me manage files with the same confidence as anyone else.

The Finder icon has changed styles many times, but its spirit has never shifted. It still welcomes every Mac user, sighted or blind, with the same silent message it always has: “You are home.”

Sources and Further Reading

Finder on your Mac – Apple Support: https://support.apple.com/guide/mac-studio/finder-apddf030866a/mac

Wikipedia – Finder (software): https://en.wikipedia.org/wiki/Finder_(software)

Macworld – The Finder Icon and the Influence of Fine Art on the Mac: https://www.macworld.com/article/225475/the-finder-icon-and-the-influence-of-fine-art-on-the-mac.html

AppleInsider – macOS Tahoe Beta 2 Swaps Finder Icon Colors Back After Historic Design Fumble: https://appleinsider.com/articles/25/06/23/macos-tahoe-beta-2-swaps-finder-icon-colors-back-after-historic-design-fumble

The Verge – macOS Tahoe Finder Icon Beta Color Change Coverage: https://www.theverge.com/news/691643/apple-macos-tahoe-26-finder-icon-beta

Eclectic Light – A Brief History of the Finder: https://eclecticlight.co/2025/02/01/a-brief-history-of-the-finder/

Basic Apple Guy – macOS Icon History: https://basicappleguy.com/basicappleblog/macos-icon-history   

By Elijah Irwin

macOS Tahoe vs Windows 11: Deciding the Ultimate Desktop OS

Apple’s latest operating system, macOS 26 “Tahoe,” and Microsoft’s Windows 11 represent two different visions of the modern desktop. Both deliver polished experiences, but they excel in different ways depending on what you need from your computer.

macOS Tahoe introduces a fresh Liquid Glass design, bringing translucency and depth across menus, sidebars, and app windows. Apple has added more customization options, from app icon tints to a redesigned Control Center where controls can be rearranged or added directly to the menu bar. A new Phone app brings call management and voicemail features to the Mac, while Live Activities from iPhone now show up on the desktop for seamless continuity. Spotlight search has also become smarter, letting users run actions directly and filter results more precisely. For gamers, Tahoe offers a Game Library, a new overlay, and support for MetalFX Frame Interpolation, aiming to smooth gameplay and boost performance on Apple silicon Macs. Importantly, Tahoe marks the final major update for Intel Macs, as Apple shifts entirely to its own chips.

Windows 11, on the other hand, continues to play to its strengths. It runs across a huge variety of hardware, from budget laptops to custom-built gaming PCs, offering unmatched compatibility and flexibility. Microsoft has invested heavily in gaming, with broad support for titles, accessories, and advanced technologies like DirectStorage. For businesses and enterprises, Windows 11 remains the leader with powerful management tools, extensive legacy app support, and mature security controls. Its Snap Layouts and multitasking features make it attractive to power users juggling multiple apps or displays. Microsoft has also leaned into AI, with Copilot integrated across the system, offering productivity shortcuts for those comfortable with cloud-based features.

So which is the “ultimate” desktop OS? If you live in Apple’s ecosystem, value design consistency, and want tight integration with your iPhone or iPad, macOS Tahoe is the natural choice—though be prepared to move away from Intel hardware. If you need flexibility, gaming support, or rely on legacy or specialized software, Windows 11 remains the more versatile platform. Both are excellent in their domains, and the right choice comes down to whether you prioritize Apple’s seamless ecosystem or Windows’ breadth of compatibility and customization. 

Original source: https://www.pcmag.com/comparisons/macos-tahoe-vs-windows-11-deciding-the-ultimate-desktop-os

Wellness Wednesday… on a Friday?

Beth and Jeff step up to the mics—without Robin this time—to explore whether the word wellness has lost some of its meaning. Is wellness about carefully balancing all the little pieces of life—mind, body, and spirit—or is it simply how you feel about yourself day to day? Together, Beth and Jeff unpack the buzz around this popular term, sharing their own perspectives on what wellness means in practice. Tune in for a thoughtful, down-to-earth conversation that may just reshape how you think about your own well-being.


You Don’t Have to Be Blind to Use a Screen Reader

August 18, 2025

A smiling dark-skinned woman with curly hair listens to her phone through white earbuds, alongside the text, “Discover how screen readers can make life easier for everyone,” on a gray background.

A screen reader is a piece of assistive technology software that turns on-screen text and interface elements into speech or braille output. It works by sending information from the operating system, applications, and web browsers through an accessibility API, which the screen reader interprets and then reads aloud or displays on a refreshable braille device. With keyboard commands or touch gestures, users can navigate headings, links, buttons, forms, and other elements, making it possible to interact with a computer or smartphone without needing to rely on vision.

More than a tool for blind users

When most people hear “screen reader,” they picture someone who is completely blind using it to access a computer or phone. That’s true for many—but far from all—users. The most recent WebAIM Screen Reader User Survey found that 23.4% of screen reader users are not blind.

Some have low vision, some have dyslexia or other reading differences, and some simply prefer the flexibility of audio. Others are developers, designers, content creators, or testers who use screen readers as part of their work.

I’m one of them. I have low vision and am legally blind, but I still read with magnification and zoom. Even so, I often use screen readers and text-to-speech because they’re faster, easier on my eyes, and more comfortable for long stretches. For me—and for many others—screen readers aren’t about replacing sight, but about expanding options.

Listening as speed reading

On my Apple devices, I’ve set up easy-to-use shortcuts to activate the built-in VoiceOver screen reader and set its speaking rate to 85% of its maximum speech rate. For context, typical human conversation is around 150 words per minute. At 85% of VoiceOver’s top speed, I’m hearing words roughly three to four times faster than that.

It didn’t start this way. I began at a comfortable pace, then gradually increased the speed over time. My brain learned to process synthetic speech the same way you might adapt to a fast talker. Now, that pace feels normal, and I can move through emails, articles, and reports in a fraction of the time.

Speed aside, the other benefit is comfort. If my eyes are tired from hours of visual work, I can switch to listening mode and keep going without strain or headaches. It’s a tool I can pick up whenever it fits the task.

Who else benefits from screen readers

Plenty of people beyond the blind community use screen readers or similar tools:

  • People with low vision: Alternating between magnification and audio can prevent fatigue and headaches.
  • Individuals with dyslexia or other learning differences: Listening can make text easier to process and understand. (More from dyslexia.com.)
  • Multitaskers: Screen readers let you consume text while cooking, walking, cleaning, or commuting.
  • Anyone with eye strain or migraines: Audio provides a break from bright screens and fine print.
  • Auditory and language learners: Hearing words reinforces learning and improves pronunciation.
  • Accessibility professionals: Designers, developers, and content creators use them to test how accessible their work is.

For many, it’s simply about using the right mode of reading for the right moment.

Common concerns—and practical solutions

Here are some concerns and questions that often come up for sighted folks when they first consider trying out a screen reader.

“The controls look complicated.” They can be at first, but you don’t have to learn everything. Start with turning it on/off, making it start and stop reading, and moving forward/back. Build from there.

“What if my device starts reading aloud in public?” Use headphones. Learn the quick mute command (often just pressing Ctrl or a two-finger tap on mobile).

“The voice is too fast.” Adjust the speed to a comfortable pace. You can always increase it later as you get used to it.

“I’m not blind—is it okay to use this?” Absolutely! Accessibility features are built for anyone who can benefit from them.

Getting started with a screen reader

Almost every modern smartphone, tablet, or computer today comes with a screen reader already pre-installed, so chances are it’s just a matter of turning it on and trying it out. Here are some basic commands for the built-in screen readers on the most common devices and operating systems.

Windows: Narrator

Pre-installed on all PCs running Windows 10 or 11.

  • Turn on/off: Press Ctrl + Windows + Enter.
  • Read everything on the page: Caps Lock + M.
  • Stop reading: Press Ctrl.

Microsoft’s complete Narrator user guide includes detailed instructions and all commands.

Many Windows users also love NVDA, a more full-featured screen reader that’s free to download and easy to install.

macOS: VoiceOver

  • Turn on/off: Press Command + F5.
  • Move forward: Control + Option + Right Arrow.
  • Read from the top: Control + Option + A.

See Apple’s VoiceOver guide for Mac for much more.

iPhone/iPad: VoiceOver

  • Turn on/off: Triple-click the side or Home button.
  • Read the screen: Swipe down with two fingers.

Apple’s iOS VoiceOver guide explains all gestures.

Android: TalkBack

  • Turn on/off: Hold both volume keys for a few seconds (if enabled).
  • Read from the top: Swipe down then right, then select “Read from top.”

See Google’s TalkBack tutorial for many more details.

Chrome OS: ChromeVox

This screen reader comes pre-installed on Chromebooks.

  • Turn on/off: Press Ctrl + Alt + Z.
  • Start reading from the top: Press Search + Ctrl + Right Arrow.
  • Stop reading: Press Ctrl.
  • Move to the next item: Press Search + Right Arrow.
  • Move to the previous item: Press Search + Left Arrow.

Google provides a ChromeVox tutorial with more commands and training resources.

A gentle first step: Site Unseen

If you’d like to experience what navigating by structure feels like—but without fully switching to a screen reader—try Site Unseen.

Site Unseen is a Chrome extension that approximates a screen reader by obscuring the visible content of the page and showing details of the currently focused element in a small box at the bottom right of the screen. You navigate with screen reader-like commands—jumping through headings, links, form fields, and more—and can use its “Peek” feature for a brief three-second view of where you are on the page. I wrote a post that offers a deep dive into the what, why, and how of Site Unseen.

It’s not a substitute for a real screen reader, but it’s a great training ground for learning keyboard navigation and understanding how structural elements on a webpage matter.

Everyday scenarios where screen readers help

Here are several examples of how screen readers can be put to everyday use, whether for accessibility, productivity, or both:

  • Making dinner: Have an article read to you while cooking.
  • Commuting: Let VoiceOver or TalkBack read the news or email while on a bus or train.
  • Tidying up: Listen to a report while folding laundry or cleaning the kitchen.
  • Walking or exercising: Catch up on long blog posts while staying in shape without having to stare at your phone.
  • Research days: Use a screen reader to skim and navigate long documents quickly.
  • Language practice: Hear correct pronunciations in context and follow along visually if you like.
  • Rest days for your eyes: Give your eyes a break from magnification or bright screens.
  • Testing your own work: If you design or publish online content, a quick screen reader check can reveal accessibility issues you’d miss visually.
  • Reading when vision is limited by environment: In low light or glare, listening can be far easier than reading.

Give it a try

You might discover that listening is sometimes more efficient than reading—especially for repetitive or text-heavy work. It’s also an eye-saver. Even if you have perfect vision, switching to audio for part of the day can prevent fatigue.

Learning a screen reader also builds empathy. Navigating your own site or a favorite app without sight gives you a clear sense of what works and what’s frustrating for users with disabilities. For developers and content creators, that insight can directly improve the quality and accessibility of your work.

And there’s a personal bonus: once you’ve built some fluency, you gain flexibility. You can choose to read visually, listen hands-free, or mix the two depending on your needs. For me, that means I can keep working or reading comfortably whether my eyes are fresh or tired.

It doesn’t have to be an all-or-nothing commitment. You can start small—have one article read aloud on your commute, or use Narrator for a quick email scan—and see how it fits. Over time, you may find it becomes an everyday tool, not just an “accessibility feature.”

Acknowledgement

Thank you to James Warnken for inspiring this post through his interview on the Equal Entry blog, and for sharing his own experiences that challenge assumptions about who uses screen readers.

ChatGPT Glossary: 56 AI Terms Everyone Should Know

AI is rapidly changing the world around us. It’s eliminating jobs and flooding the internet with slop. From the massive popularity of ChatGPT to Google cramming AI summaries at the top of its search results, AI is completely taking over the internet. With AI, you can get instant answers to pretty much any question. It can feel like talking to someone who has a doctoral degree in everything.

But that aspect of AI chatbots is only one part of the AI landscape. Sure, having ChatGPT help do your homework or having Midjourney create fascinating images of mechs based on the country of origin is cool, but the potential of generative AI could completely reshape economies. That could be worth $4.4 trillion to the global economy annually, according to McKinsey Global Institute, which is why you should expect to hear more and more about artificial intelligence. 

It’s showing up in a dizzying array of products — a short, short list includes Google’s Gemini, Microsoft’s Copilot, Anthropic’s Claude and the Perplexity search engine. You can read our reviews and hands-on evaluations of those and other products, along with news, explainers and how-to posts, at our AI Atlas hub.

As people become more accustomed to a world intertwined with AI, new terms are popping up everywhere. So whether you’re trying to sound smart over drinks or impress in a job interview, here are some important AI terms you should know. 

This glossary is regularly updated. 


artificial general intelligence, or AGI: A concept that suggests a more advanced version of AI than we know today, one that can perform tasks much better than humans while also teaching and advancing its own capabilities. 

agentive: Systems or models that exhibit agency with the ability to autonomously pursue actions to achieve a goal. In the context of AI, an agentive model can act without constant supervision, such as a high-level autonomous car. Unlike an “agentic” framework, which works in the background, agentive frameworks are out front, focusing on the user experience.

AI ethics: Principles aimed at preventing AI from harming humans, achieved through means like determining how AI systems should collect data or deal with bias. 

AI safety: An interdisciplinary field that’s concerned with the long-term impacts of AI and how it could progress suddenly to a super intelligence that could be hostile to humans. 

algorithm: A series of instructions that allows a computer program to learn and analyze data in a particular way, such as recognizing patterns, to then learn from it and accomplish tasks on its own.

alignment: Tweaking an AI to better produce the desired outcome. This can refer to anything from moderating content to maintaining positive interactions with humans. 

anthropomorphism: When humans tend to give nonhuman objects humanlike characteristics. In AI, this can include believing a chatbot is more humanlike and aware than it actually is, like believing it’s happy, sad or even sentient altogether. 

artificial intelligence, or AI: The use of technology to simulate human intelligence, either in computer programs or robotics. A field in computer science that aims to build systems that can perform human tasks.

autonomous agents: An AI model that has the capabilities, programming and other tools to accomplish a specific task. A self-driving car is an autonomous agent, for example, because it has sensory inputs, GPS and driving algorithms to navigate the road on its own. Stanford researchers have shown that autonomous agents can develop their own cultures, traditions and shared language.

bias: In regards to large language models, errors resulting from the training data. This can result in falsely attributing certain characteristics to certain races or groups based on stereotypes.

chatbot: A program that communicates with humans through text that simulates human language. 

ChatGPT: An AI chatbot developed by OpenAI that uses large language model technology.

cognitive computing: Another term for artificial intelligence.

data augmentation: Remixing existing data or adding a more diverse set of data to train an AI. 

dataset: A collection of digital information used to train, test and validate an AI model.

deep learning: A method of AI, and a subfield of machine learning, that uses multiple parameters to recognize complex patterns in pictures, sound and text. The process is inspired by the human brain and uses artificial neural networks to create patterns.

diffusion: A method of machine learning that takes an existing piece of data, like a photo, and adds random noise. Diffusion models train their networks to re-engineer or recover that photo.

emergent behavior: When an AI model exhibits unintended abilities. 

end-to-end learning, or E2E: A deep learning process in which a model is instructed to perform a task from start to finish. It’s not trained to accomplish a task sequentially but instead learns from the inputs and solves it all at once. 

ethical considerations: An awareness of the ethical implications of AI and issues related to privacy, data usage, fairness, misuse and other safety issues. 

foom: Also known as fast takeoff or hard takeoff. The concept that once someone builds an AGI, it might already be too late to save humanity.

generative adversarial networks, or GANs: A generative AI model composed of two neural networks to generate new data: a generator and a discriminator. The generator creates new content, and the discriminator checks to see if it’s authentic.

generative AI: A content-generating technology that uses AI to create text, video, computer code or images. The AI is fed large amounts of training data and finds patterns to generate its own novel responses, which can sometimes be similar to the source material.

Google Gemini: An AI chatbot by Google that functions similarly to ChatGPT but also pulls information from Google’s other services, like Search and Maps. 

guardrails: Policies and restrictions placed on AI models to ensure data is handled responsibly and that the model doesn’t create disturbing content. 

hallucination: An incorrect response from AI. Can include generative AI producing answers that are incorrect but stated with confidence as if correct. The reasons for this aren’t entirely known. For example, when asking an AI chatbot, “When did Leonardo da Vinci paint the Mona Lisa?” it may respond with an incorrect statement saying, “Leonardo da Vinci painted the Mona Lisa in 1815,” which is 300 years after it was actually painted. 

inference: The process AI models use to generate text, images and other content about new data, by inferring from their training data. 

large language model, or LLM: An AI model trained on mass amounts of text data to understand language and generate novel content in human-like language.

latency: The time delay between when an AI system receives an input or prompt and when it produces an output.

machine learning, or ML: A component in AI that allows computers to learn and make better predictive outcomes without explicit programming. Can be coupled with training sets to generate new content. 

Microsoft Bing: A search engine by Microsoft that can now use the technology powering ChatGPT to give AI-powered search results. It’s similar to Google Gemini in being connected to the internet. 

multimodal AI: A type of AI that can process multiple types of inputs, including text, images, videos and speech. 

natural language processing: A branch of AI that uses machine learning and deep learning to give computers the ability to understand human language, often using learning algorithms, statistical models and linguistic rules.

neural network: A computational model that resembles the human brain’s structure and is meant to recognize patterns in data. Consists of interconnected nodes, or neurons, that can recognize patterns and learn over time. 

open weights: When a company releases an open weights model, the final weights of the model — how it interprets information from its training data, including biases — are made publicly available. Open weights models are typically available for download to be run locally on your device. 

overfitting: An error in machine learning where a model fits its training data so closely that it may only be able to identify specific examples in that data, but not new data.

paperclips: The Paperclip Maximiser theory, coined by philosopher Nick Bostrom of the University of Oxford, is a hypothetical scenario where an AI system will create as many literal paperclips as possible. In its goal to produce the maximum number of paperclips, an AI system would hypothetically consume or convert all materials to achieve its goal. This could include dismantling other machinery to produce more paperclips, machinery that could be beneficial to humans. The unintended consequence of this AI system is that it may destroy humanity in its goal to make paperclips.

parameters: Numerical values that give LLMs structure and behavior, enabling them to make predictions.

Perplexity: The name of an AI-powered chatbot and search engine owned by Perplexity AI. It uses a large language model, like those found in other AI chatbots, but has a connection to the open internet for up-to-date results. 

prompt: The suggestion or question you enter into an AI chatbot to get a response. 

prompt chaining: The ability of AI to use information from previous interactions to color future responses. 

prompt engineering: The process of writing prompts for AIs to achieve a desired outcome. It requires detailed instructions, combining chain-of-thought prompting and other techniques, including highly specific text. Prompt engineering can also be used maliciously to force models to behave in ways they weren’t originally intended for. 

quantization: The process by which an AI large language model is made smaller and more efficient (albeit slightly less accurate) by lowering its precision from a higher format to a lower format. A good way to think about this is to compare a 16-megapixel image to an 8-megapixel image. Both are still clear and visible, but the higher resolution image will have more detail when you’re zoomed in.

slop: Low-quality online content made at high volume by AI to garner views with little labor or effort. The goal with AI slop, in the realm of Google Search and social media, is to flood feeds with so much content that it captures as much ad revenue as possible, usually to the detriment of actual publishers and creators. While some social media sites embrace the influx of AI slop, others are pushing back.

stochastic parrot: An analogy of LLMs that illustrates that the software doesn’t have a larger understanding of meaning behind language or the world around it, regardless of how convincing the output sounds. The phrase refers to how a parrot can mimic human words without understanding the meaning behind them. 

style transfer: The ability to adapt the style of one image to the content of another, allowing an AI to interpret the visual attributes of one image and use it on another. For example, taking the self-portrait of Rembrandt and re-creating it in the style of Picasso.

synthetic data: Data created by generative AI rather than collected from the real world, produced by models trained on real data. It’s used to train mathematical, ML and deep learning models. 

temperature: Parameters set to control how random a language model’s output is. A higher temperature means the model takes more risks. 
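As a rough illustration, and a simplification of what any real model does, temperature divides the model’s raw scores before they are converted into probabilities, so lower values sharpen the distribution and higher values flatten it:

```typescript
// Illustrative sketch of temperature-scaled softmax over made-up token scores.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((x) => x / temperature);
  const maxVal = Math.max(...scaled);                   // subtract the max for numerical stability
  const exps = scaled.map((x) => Math.exp(x - maxVal));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.5];                         // hypothetical scores for three candidate tokens
console.log(softmaxWithTemperature(logits, 0.5));       // peaked: the top choice dominates
console.log(softmaxWithTemperature(logits, 1.5));       // flatter: more randomness in sampling
```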

text-to-image generation: Creating images based on textual descriptions.

tokens: Small bits of written text that AI language models process to formulate their responses to your prompts. A token is equivalent to four characters in English, or about three-quarters of a word.

training data: The datasets used to help AI models learn, including text, images, code or data.

transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, like in sentences or parts of images. So, instead of analyzing a sentence one word at a time, it can look at the whole sentence and understand the context.

Turing test: Named after famed mathematician and computer scientist Alan Turing, it tests a machine’s ability to behave like a human. The machine passes if a human can’t distinguish the machine’s response from another human. 

unsupervised learning: A form of machine learning where labeled training data isn’t provided to the model and instead the model must identify patterns in data by itself. 

weak AI, aka narrow AI: AI that’s focused on a particular task and can’t learn beyond its skill set. Most of today’s AI is weak AI. 

zero-shot learning: A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while only being trained on tigers. 

Wellness Wednesday: Disaster Readiness: What’s Your Plan in an Emergency?


This episode of Wellness Wednesday with hosts Beth Gustin, LPC, and Robin Ennis, LCSW, CPC, explores emergency preparedness for natural disasters, with a focus on considerations for blind and low vision individuals. With recent fires in California and extreme cold weather across the U.S., the hosts emphasize the importance of having a plan in place before disaster strikes.

 

Key topics covered include:

• Essential emergency supplies: Medications, non-perishable food, extra clothing, ID documents, and pet supplies (for service animals).

• Mobility and carrying emergency items: Strategies for packing necessities while using a cane, guide dog, or other mobility aids.

• Communication plans: Keeping emergency contact numbers handy, knowing how to identify first responders, and having a backup power source for your phone.

• Emergency planning at home and work: Identifying escape routes, knowing how to reach help quickly, and coordinating with neighbors or family for support.

• Emotional impact: Managing anxiety during emergencies and coping with survivor’s guilt if others are more severely affected.

• Being a resource to others: The value of calmness and preparedness, as blind and low vision individuals often develop strong planning skills out of necessity.

 

The episode encourages listeners to evaluate their current emergency plans, discuss preparedness with loved ones, and share their experiences and questions with the Wellness Wednesday team.