Microsoft says this single key is the difference between an AI PC and just a PC with AI

What you need to know

  • Intel made several AI announcements at its AI PC Acceleration Program in Taipei, Taiwan this week.
  • During the event, Intel unveiled Microsoft’s definition of an AI PC, which includes some surprising requirements.
  • Microsoft’s definition of an AI PC requires a system to have a CPU, GPU, and NPU, to support Copilot, and to have a Copilot key.
  • Notably, some PCs that have already shipped have a CPU, GPU, and NPU but do not have a Copilot key, meaning they do not meet Microsoft’s definition of an AI PC.

Intel shared Microsoft’s definition of an AI PC at its AI PC Acceleration Program in Taipei, Taiwan this week. The chipmaker is one of several industry leaders to push AI hardware. Intel’s Core Ultra processors feature neural processing units (NPUs) specifically optimized for AI. Those chips compete with CPUs from AMD and Qualcomm. But regardless of which company makes the chip in your PC, your device can be an AI PC in the eyes of Microsoft. Tom’s Hardware reported on the event.

Microsoft’s definition of an AI PC was made jointly with Intel, though it opens the doors for AMD, Intel, or Qualcomm devices to carry the designation. According to Microsoft, a computer must have a CPU, GPU, and NPU to be an AI PC. Additionally, supporting Microsoft Copilot and having a Copilot key are requirements. That last item has caused a bit of a stir, since it means that to be labeled an AI PC a system has to have a specific key on a keyboard.

Earlier this year, Microsoft announced that future PCs would feature a Copilot key. The company called it the “first significant change to the Windows PC keyboard in nearly three decades.” The new Copilot key will be mandatory for OEMs at some point in the near future, but some 2024 PCs lack the key. By strict definition, a computer with all the required hardware and support for Copilot is not an AI PC unless it also has a Copilot key, at least in the eyes of Microsoft.

What is an AI PC?

Microsoft's requirements for an AI PC
Microsoft requires a PC to have a Copilot key to be called an AI PC. (Image credit: Intel)

Apple infamously asked “what’s a computer” in an ad several years ago. In 2024, the question on people’s minds is “what’s an AI PC?”

The saga surrounding the definition of an AI PC reminds me of the famous xkcd comic about competing standards. In that comic there are 14 competing standards, which is considered ridiculous by one of the characters. The obvious solution is to create one universal standard. As you likely guessed, this results in there being 15 competing standards.

With Qualcomm, AMD, and Intel making processors built for AI and Microsoft and other companies working on software, there isn’t a clear definition of an AI PC. Even Intel has its own definition of an AI PC despite announcing a jointly created definition with Microsoft. Intel’s definition is largely the same as the one announced yesterday, but Intel does not require a system to have a Copilot button or Copilot support to be an AI PC.

So, what is an AI PC? It depends on who you ask. Microsoft wants any PC marketed as an AI PC to work with Microsoft software and have a Copilot button. By that definition the $1,600 ASUS ROG Zephyrus G14 with a CPU, GPU, and NPU is not an AI PC because it doesn’t have a Copilot key on its keyboard. Maybe ASUS can send users a key to swap onto the PC to earn the AI PC designation.


Back Tap on iPhone: the forgotten accessibility action

Image: A hand holding an iPhone, captioned “Back Tap on iPhone: the forgotten accessibility action.”

by Joe Lonergan

A lot was made of Back Tap when it came out a few years ago, but so much has happened since then that I think it has been overlooked in favor of newer features.

Back Tap is an accessibility feature on iPhone that lets you double-tap or triple-tap the back of the device to perform an action or run a shortcut. Here’s how to set it up:

  1. On your iPhone, go to Settings > Accessibility > Touch.
  2. Scroll down, then select Back Tap.
  3. Select either Double Tap or Triple Tap.
  4. Scroll down to the Shortcuts area, then choose a shortcut.
  5. Tap Back Tap to apply the setting.
  6. Double-tap or triple-tap the back of your phone to run the shortcut.

So why would we use it?

It sits in the Touch area of the Accessibility settings, so we can presume it is there to help people who have trouble interacting with the touch screen.

It can be used to trigger many shortcuts and to launch many applications; read on for some use cases.

If you have an iPhone with no Home button, you can set Back Tap to launch the App Switcher or unlock the phone. These actions can be difficult for some users, and tapping the back of the phone instead can make them so much easier.

You can also use it to launch the Magnifier app. There are many occasions throughout the day when you may need Magnifier, and having a quick way to open it is a real convenience.

You can use Back Tap to invoke a Siri Shortcut. If you have set up Siri Shortcuts in the Shortcuts app, you will see them in the list of actions available in the Back Tap section. A Siri Shortcut could play your favourite radio station, tell you your current location, turn on the lights, and more.
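For developers curious how an app’s own action ends up in that Back Tap list, here is a minimal Swift sketch using Apple’s App Intents framework (iOS 16 and later). The intent name and what it does are hypothetical; this is only an illustration of the general shape, not a complete app:

    import AppIntents

    // Hypothetical action: once the app is installed, this intent appears as an
    // action in the Shortcuts app, and a shortcut built from it can then be
    // assigned under Settings > Accessibility > Touch > Back Tap.
    struct PlayFavoriteStationIntent: AppIntent {
        static var title: LocalizedStringResource = "Play Favorite Station"

        func perform() async throws -> some IntentResult {
            // A real app would start audio playback here.
            return .result()
        }
    }

From the user’s side, none of this matters: any shortcut you can build in the Shortcuts app can be assigned to a double tap or triple tap.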

Note: you can assign two actions to Back Tap, one to Double Tap and one to Triple Tap.

So, if Back Tap is something you have not checked out in a while, revisit it, and see if it could be useful for you. Let us know if you have any great ideas for the feature.

 

BlindShell Hearing Aid Compatibility Update

Dear BlindShell Community,

 

We hope this email finds you well. In response to continuous demand for hearing aid compatibility, we’re excited to share a significant development with you. 

 

Based on the ongoing requests from our valued community members, we’ve taken the initiative to collaborate with a third-party organization (Towson University Hearing & Balance Center) to conduct extensive hearing aid testing. As a result, we’re thrilled to announce that the BlindShell Classic 2 is compatible with leading hearing aid brands such as ReSound, Oticon, and Phonak!

 

Whether you’re a current user of the BlindShell Classic 2 or considering joining our community, this compatibility upgrade opens up new possibilities for enhancing your daily experiences. Now, you can seamlessly integrate your hearing aids with our device to enjoy improved audio clarity and communication.

 

To explore the full list of compatible hearing aids and learn more about integrating them with the BlindShell Classic 2, we invite you to visit our new dedicated landing page by clicking here. There, you’ll find comprehensive information and resources to help you make the most of this exciting compatibility upgrade.

 

Thank you for being a valued member of the BlindShell community. Together, we’re breaking down barriers and creating a more accessible world for everyone. 

 

Best regards,

 

BlindShell USA Team

Trying Apple’s Personal Voice

Apple recently introduced Personal Voice on newer devices across their hardware lineup. I have had a little experience with the basic concept behind this sort of technology from my time at Microsoft, where I dabbled with one of Microsoft’s Azure cognitive services to do something similar.

The basic concept behind these experiences is that you record some set of known text and then software converts that into a synthetic version of your voice. In Apple’s case it is 150 phrases ranging from just a few words to maybe at most 20 words in a single phrase.

After you finish recording, there is some processing time and then your voice is ready to use. On an iPhone 15 Pro, my voice was ready in about five hours. You are not able to do anything else with the phone while this is happening. On an M1 MacBook Air from 2020, processing took about two hours and I was able to do other tasks at the same time, such as writing this blog post.

Once your voice is created, you can use it as one of the voices available with Apple’s Live Speech feature. This allows you to type in various apps where you would typically use your voice and have the synthetic voice speak for you. It complements the existing voices Apple makes available and has the added benefit of bearing some relationship to your own voice. In situations where a person knows ahead of time that they are going to lose their voice, it does offer some ability to preserve your own speech.
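Apple also allows third-party apps to speak with your Personal Voice if you explicitly grant them access. As a rough sketch of what that looks like in code, assuming iOS 17 or later, an already created Personal Voice, and the user approving the permission prompt, something along these lines should work:

    import AVFoundation

    // Sketch: speak a string with the user's Personal Voice, if one is
    // available and this app has been authorized to use it (iOS 17+).
    func speakWithPersonalVoice(_ text: String, using synthesizer: AVSpeechSynthesizer) {
        AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
            guard status == .authorized else { return }

            // Personal Voices carry the .isPersonalVoice trait.
            let personalVoice = AVSpeechSynthesisVoice.speechVoices()
                .first { $0.voiceTraits.contains(.isPersonalVoice) }

            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = personalVoice // nil falls back to the default voice

            synthesizer.speak(utterance)
        }
    }

Live Speech itself needs none of this; everything is handled in the system settings described later in this post.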

Multiple factors influence the quality of the end result: microphone, recording environment, and more, just to name a few. For short phrases the differences are likely not noticeable, but in my samples even the pace at which I read the phrases varied; there is a 21-second difference when the two voices read back the same text.

I made two voices in trying this experience. The first was recorded using the default Apple headphones on an iPhone 15 Pro, the second using an Arctis 7 headset. Both samples are my Apple Personal Voice reading my blog post on Accessibility Island.

I have also made a sample of my original voice sample of three phrases and then Apple’s Personal Voice speaking those phrases from my recording with the Arctis 7 device. The Personal Voice speaking the phrases is the result of my typing them into an edit box and asking for them to be spoken using my newly created voice. The phrases are in this recording and have the original voice sample followed immediately by the Personal Voice speaking the phrase. After all three phrases are played, the entire series is duplicated once. The phrases are:

can you call me in an hour

Did you remember to take out the trash?

Is she going to the grocery store now or in the morning?

Creating a Personal Voice is straightforward. On whatever device you are using, go to Settings > Accessibility > Speech > Personal Voice. You’ll be prompted to record a short phrase to test your recording environment and advised of any changes you should make, such as reducing background noise. You then start the process of recording 150 phrases. They do not all need to be recorded at once. When you are finished, you’ll be advised to lock your phone if doing this on an iPhone, or simply ensure your computer is charged if using a Mac.

When the voice is created, you can start using it with Live Speech by going to the same Speech area of Accessibility settings and opening Live Speech. Turn Live Speech on and then pick from the list of voices. Your Personal Voice should be listed.

If you are doing all of this with VoiceOver, Apple’s screen reader, as I did, the process of creating a voice works well with VoiceOver. You can use VoiceOver to read the phrase to be read, then activate a record button and repeat the phrase. Recording stops when you stop speaking. If you turn on a setting for continuous recording, you will advance to the next phrase automatically and can repeat the process. I did notice that sometimes VoiceOver automatically read the next phrase but not always. Focus seems to go to the Record button and I suspect there is a timing issue between the phrase being spoken and VoiceOver announcing the newly focused button.

Having created two voices, I would say it is probably a good idea to take a short break during the reading of the 150 phrases from time to time. I found myself not speaking as clearly as I wanted once in a while as well as having sort of the same singsong phrasing. Listening to my voice samples and how the voice came out, I would also say the microphone used has a big impact on the voice quality. This isn’t surprising but is made apparent to me comparing the samples of what my recordings sounded like and how that turns out when the same text is spoken by Personal Voice. I don’t think either microphone that I used would be what I would recommend for creating a voice to be used permanently.

I was curious whether Apple would allow the Personal Voice you create to be used with VoiceOver. I didn’t expect that to be possible, and indeed it does not appear to be.

As with pretty much anything in AI, synthetic speech is a rapidly changing technology. There are certainly higher quality voices in the arena of synthesized speech but Apple has done a good job at allowing you to tap your own voice on consumer hardware in an easy to use process. Listening to my own voice, it is clear it isn’t me and I wasn’t expecting it to be. But even on the basic hardware I used, there are characteristics of my voice present and if I were in a situation where I was going to lose my physical voice permanently, this is one option I would definitely explore further.

Generative AI: The World in the Palm of Your Hand

Promotional image for “On TECH & VISION with Dr. Cal Roberts” featuring a purple and blue gradient background, a Wi-Fi symbol, a wireframe globe in a hand, and a portrait of Dr. Cal Roberts in a suit.

When it comes to emerging technology, there’s no hotter topic than artificial intelligence. Programs like ChatGPT and Midjourney are becoming more popular and are inspiring people to explore the possibilities of what AI can achieve – including when it comes to accessible technology for people who are blind or visually impaired.

One of those people is Saqib Shaikh, an engineering manager at Microsoft. Saqib leads a team that developed an app called Seeing AI, which utilizes the latest generation of artificial intelligence, known as generative AI. Dr. Cal spoke with Saqib about how generative AI works, his firsthand experience using an app like Seeing AI, and how it has helped improve his daily life.

This episode also features Alice Massa, an occupational therapist at Lighthouse Guild. Alice described the many benefits of generative AI and how it helps her clients better engage with their world.

Saqib and Alice also both agreed that the current state of AI is only the beginning of its potential. They shared their visions of what it could achieve in the future – and it doesn’t seem that far off.

Podcast Transcription

Female AI Voice: The image captures a close-up view of a small rocky pool of water inhabited by various starfish. The pool is surrounded by rocks and the water is clear, allowing a detailed view of the starfish and the rocky bottom of the pool. The starfish are in a variety of colors and sizes, including some with a reddish-orange hue and others that are beige with brown patterns.

The starfish are scattered throughout the pool, some partially submerged in water, while others are fully visible. Their arms are spread out, showcasing their iconic star shape. The rocky bottom of the pool is visible through the clear water.

The rocks are small and appear to be smooth, with colors ranging from dark brown to light beige. There are also larger, irregularly shaped rocks that are reddish brown. There is a yellow sign on a rock on the right side of the image with the text: “But please don’t touch.”

Roberts: What you just heard is the audio description of a starfish tank at an aquarium. Judging from the details, you might think it’s a prerecorded clip from an audio tour, but it’s not. This clip was generated on the spot by artificial intelligence, or AI.

I’m Doctor Cal Roberts and this is On Tech and Vision. Today’s big idea is technology that can literally put the world in the palm of your hand. I’m talking about the latest generation of artificial intelligence called Generative AI.  It is revolutionizing how people who are blind or visually impaired interact with their environment, not just simple object identification and directions, but enabling users to experience their surroundings fully and engage on a much deeper level.

To learn more about this new evolution, I spoke with Saqib Shaikh, an engineering manager at Microsoft. Saqib has developed an app called Seeing AI and has firsthand experience using AI in his daily life.

It’s a pleasure to welcome to On Tech and Vision Saqib Shaikh. Saqib was the Pisart Award winner at Lighthouse Guild in October of 2023 and just wowed us with his vision for the future of AI and how AI will impact the lives of people who are blind and visually impaired. And so it’s just a great honor to welcome Saqib to On Tech and Vision. Welcome.

Shaikh: Thank you so much. It’s an absolute honor to be here, thank you very much.

Roberts: So, our audience is getting familiar with this term, AI, artificial intelligence. People hear it all the time. Now we have a new term for them, Generative AI. Explain. What’s Generative AI?

Shaikh: In many ways, it’s just the latest wave of artificial intelligence. So, artificial intelligence has been around for many decades, and when I started with Seeing AI at Microsoft maybe seven years ago, there was a whole new wave back then of deep learning. It really improved what was possible. And we’re now at this new generative AI wave. So, in a sense, the term doesn’t matter. You may have heard the term large language model in the press, and again this is a tech word that just means we have really big AI systems which are trained on really large computers and a huge amount of data in a way that was never thought possible before. And the results, which is really what matters, are just remarkable. And this is the technology behind tools like ChatGPT and of course now Seeing AI as well.

Roberts: So compared to past generations of AI, what does the user experience?

Shaikh: We are able to get a lot more details, because the AI is able to, as “generative” suggests, generate the descriptions, or generate language in general, from having observed huge amounts of data on the Internet. And so it is able to give the impression, in the sense of a chat bot, of responding like a human, and in the case of describing images, give way more detail than was ever possible before.

Roberts: So this is information, as you say, from the Internet. How about people’s personal experience? How does generative AI help someone with the repetitive tasks they have to do to function on a daily basis?

Shaikh: We’re still at the beginning of this, but some of the ways are, Microsoft has these tools called copilots, because they sit alongside the person, the human, and like a copilot they can help you with everyday tasks. And again, it’s just the beginning. But I’m excited to see that they can summarize information or generate reports of interest to someone who’s blind. Maybe there’s a huge amount of data that can be analyzed, summarized, or trend-spotted.

And in the case of Seeing AI, we were talking about the real world with images. It can go from giving a one-sentence overview of what’s in an image to giving you a whole paragraph describing intricate details of what’s in the image, again in a way that no one thought possible even a year or two ago.

Roberts: So take us through your own personal journey of how you got interested in AI.

Shaikh: So, I’ve been interested in technology since I was a kid. I sort of stumbled onto learning to code when I was in my early teens and I just loved it. I loved this idea that you could have an idea, do some thinking, some typing, and you could make something out of nothing.

And from that, it’s the transformative ability of technology to improve people’s lives. Growing up as a blind kid, I was surrounded by software to generate Braille or to print out the Braille and generate raised diagrams, and so much more. So assistive technology was what enabled me at school, and of course, beyond that as well. 

And so that took me to studying computer science at university, and then I specialized in artificial intelligence in my postgraduate studies and eventually came to Microsoft, where I’ve done a whole bunch of things. But it was about seven years ago when we had the opportunity to do a hackathon. The CEO said that there’s a one-week period where you can do whatever catches your interest, and for me I thought, I want to bring together my interest in artificial intelligence and some of the needs I identified individually as a blind person. That’s where it started.

And we spent a week prototyping some solutions for AI to help blind people, building on this vision of: what if there was a visual assistant with you, like when you have a sighted guide, who understood you and understood the world around you and told you what was going on, what was interesting to you, and could answer your questions. And we’re still quite a way from that vision, but we’re closer than we ever have been.

And so that brought me to the Seeing AI journey, which I’ve been on for, like I say, about seven years or so now, which feels like a remarkably long time, but there are so many exciting things going on that it doesn’t feel that long at all.

Roberts: So for those who don’t know the Seeing AI app, explain it. What does it do and how does someone use it?

Shaikh: Yes, it’s a free mobile app which you can download from the App Store or Play Store, and it’s a visual assistant. We sometimes talk about it as the talking camera: you just hold up your phone, point it, and it will start reading things to you. It can also identify your friends, describe what’s in a picture, or recognize products. And there’s a whole bunch of different tasks it can assist with in one’s daily life.

Most recently, some of the interesting things powered by generative AI are, as I said, going from taking a photo, say, from your photo gallery, reliving memories from your vacation, or even just capturing what’s in front of you right now. You can go from it saying it’s a man sitting on a chair in a room to it actually giving you maybe a whole paragraph describing what’s in the room, what’s on the shelf, what’s in the background, what’s through the window, even. And it’s just remarkable.

I work on this every day. I understand technology, yet as an end user, I still am surprised and delighted by what this generation of AI is capable of telling me.

Roberts: Alice Massa is an occupational therapist. She provides therapy to people who are visually impaired, including how they incorporate vision technology into their lives. And she’s a user of vision technology herself.

Massa: My name is Alice Deluise-Massa. I’m an occupational therapist. I’ve been in practice 50 years. Most people are not always sure what occupational therapy is. My view of occupational therapy is that it’s an empowering profession that looks at people’s strengths, looks at the barriers that are preventing them from doing what they want to do, and offering strategies in order to meet their goals and to empower them to be able to do the things they choose to do.

So I am now working at the Lighthouse and I see clients for occupational therapy and what we do is we do what we call an occupational profile and what that is, is what people’s lives are like, what their strengths are, what their interests are, what they are struggling with doing, what they would like to do, and then we try to match them with strategies, techniques, compensation so that they can do the things they want to do, and that’s where the electronics came into play.

The reason I love my iPhone, and specifically the Seeing AI app, is that it’s been developed really with visually impaired people in mind. Seeing AI is so intuitive for somebody who has vision loss. There are a lot of things that you do in a day that other people take for granted. You know, walking in the street, knowing when to cross the street, knowing which street you’re on, going into a store and buying regular milk instead of low-fat milk. Those are all the kind of day-to-day things that people really take for granted.

I was not able to shop alone before I had Seeing AI. Now, I go into TJ Maxx and I pick up a blouse. I can see what size it is. I can see what the price is. I can see what it’s made of, just by putting the tag in front of my phone. It’ll also read the menu at a restaurant.

Roberts: In fact, Seeing AI can do more than just read the menu at a restaurant. It can help you decide what to order. Here’s a recording from Saqib showing how it’s done.

Man: After scanning this restaurant menu in Seeing AI, what are the categories?

AI Voice: The categories in the document are antipasti, salads, pizza with subcategories red and white, specials, and dessert.

Man: What Antipasto dishes are there?

AI Voice:  There are three antipasti dishes mentioned in the document. One: oven roasted asparagus with paprika aioli, red onion, tomatoes, arugula and lemon zest. Two: polenta with marinara sauce, granorino, homemade meatballs and herbs. Three: oven roasted castelvetrano olives marinated with chilies, garlic and lemon.

Man: How much does the asparagus antipasti cost?

AI Voice:  The asparagus antipasti costs $12.00.

Roberts: Now we’ll hear Alice tell us more about the incredible things Generative AI can help us do.

Massa: I have a client at the Lighthouse who really was very disinterested in doing anything. The only thing he did on his phone was answer a call from his pastor and call his pastor, and I was able to put Seeing AI on his phone and his wife said the first time in two years she saw a smile on his face because now he could read his Bible by himself.

He just literally had to put the phone in front of the Bible and it would read the page. So for him this was life-changing. I call my phone my sister, because my phone is the person I go to when I’m on the street. If I’m walking in Manhattan, like the other day, I was meeting someone on 47th St. and I wasn’t sure which block I was on. All I did was open Seeing AI’s Short Text channel, hold it up to the street sign, and it told me I was on West 46th St.

So those are the kinds of things where it really makes a difference.

Roberts: As we’ve pointed out in many of our podcast episodes, involving the users of technology in its development is vital. Saqib agrees.

Shaikh: I view my work in some ways as a conversation between the community and the scientists. And one of the most fun things about that, like, you know, I just really enjoy hearing from people. What are they using Seeing AI for? Of course, what could it do in the future? But often we’re surprised even by the way these people are using it today because everyone has different tasks and different requirements in their lives. And we are making a set of tools to help as many people as possible. 

A story that comes to mind is that Seeing AI can now recognize currency bills, but it did not at the very beginning, and we got an e-mail from a group of users who had decided they were going to use the face recognition feature to train Seeing AI to recognize all the presidents and faces on the bank notes, like Lincoln or whoever, and use that to recognize currency. And I was like, that is so innovative, that our users are taking these tools and they themselves are inventing by finding the ways that it’s going to enable them in their daily lives.

Roberts: And an incredible innovation like this is only scratching the surface of  Generative AI’s capabilities. 

The term multimodal capabilities comes up a lot in discussions about Generative AI. Can you speak more on that?

Shaikh: Yes. So a lot of generative AI began with the concept that you’d have something similar to a chat bot like ChatGPT, where you’re typing text and you’re reading text. Multimodal means: what if there were multiple modes of input, like audio or a picture, or potentially even video. So of interest to someone who can’t see is this idea that you can give generative AI systems an image, and maybe a question about the image or some instructions about what you want to know about it, and you’ll be able to get textual or audio output describing the image or answering your question. That’s the crux of the multimodal part.

Roberts: And so multimodal can refer not only to the sources of input, but also to the means of output, so not everything has to be audio. Our listeners to On Tech and Vision have learned a lot over the years about haptics and about being able to feel data, not just hear it.

Shaikh: Yeah, absolutely. And today, I don’t know of much work with sort of these multimodal models doing haptics, but why not? In a sense, we’ve seen systems doing music generation and other audio and you know haptics is not that different to audio.

And also generating images. I’m quite excited by this idea that someone who cannot see could actually generate good-looking images. So I think that we are at the cusp of this new future where, in everyday life but also in productivity at work and school, we haven’t yet discovered all the possibilities of how this can transform the types of tasks that someone who cannot see could independently carry out.

Roberts: So we care a lot about assistive technology for people who are blind. But we also recognize that there are other forms of disability that could benefit from AI, particularly people who can’t hear.

Shaikh: Absolutely. And in recent years, we’ve seen the technology for that audience come a long, long way as well. Speech recognition used to be very hit or miss. But now speech recognition, dictation, or identifying who is speaking, these aspects of AI have really become quite accurate. So they can be usable in everyday life and work very fast, so that you can even use them in real time. So you’re absolutely right.

It’s not just one form of disability, and it doesn’t have to be limited to disabilities, though I have this philosophy that people with disabilities are often at the forefront of new technologies, because we have the most to gain and are the most invested in experimenting with this.

So, if you look at it historically, there are so many innovations which were initially created for people with disabilities, whether that be the telephone or the iPhone touchscreen, on-screen keyboards, text to speech and speech recognition, flatbed scanners and talking books, and the list just goes on.

It’s this idea that people with disabilities have the most to gain, and so technologies which later become commonplace are often explored and created in the research phases for people with disabilities. And so if you take that one step further and say, OK, if we look to the future, what are the needs of people with disabilities that are not being met that this new wave of AI can address, that’s incredibly exciting. And I have this idea of what I call an assistive agent or an assistive companion.

So what if AI could understand you as a human? What are your capabilities? What are your limitations at any moment in time, whether that’s due to a disability or your preferences or something else? And could it understand the environment, the world you’re in, or the task you’re doing on the computer, or whatever? And then can we use the AI to close that gap and enable everyone to do more and realize their full potential?

So right now, that’s my big vision. And yes, we’re probably still a ways away, but we’re getting closer all the time. So, it’s just this idea that we are all different; every single one of us has our own needs, whether we call it disability or not.

Roberts: Alice Massa also has a vision for the future of AI.

Massa: What I would love is if it could describe an action scene. Let’s say I was at a baseball game and the guy hit the ball and the shortstop is throwing the ball to third base. I believe that eventually it will be able to describe those kinds of scenes. I think that would be ideal, and the reason I say that is I have worked with some young people who really do want to socialize by going to sporting events and things like that. So, I always think of Phil Rizzuto, like listening to a ball game as it’s happening right in front of them, almost like a radio announcer.

When I was at the theater the other night, it would have been nice if I could have had some description of the theater. The movies have audio description, but that’s done by someone as they watch them, because the movie never changes. But in real life things are not scripted, so AI would be able to react to what’s actually happening rather than give a scripted description of the action.

One of the places in real life, other than theater and entertainment, is walking in the street. Last night we were walking through Times Square at night. Talk about having vision problems and feeling a little bit challenged. It would be wonderful if there was, like, a button I could wear that, as I’m walking in the street, is sort of saying to me: there’s a crowd of five people in front of you. Four people are walking toward you directly in your path. A car is coming on your left. You know, if there was some way that you could use artificial intelligence to cue you, so that when you’re moving in a communal area you would be able to navigate much more easily.

Roberts: Saqib has the same hope. I asked him to tell me more about how Generative AI could someday function as a navigational tool, and much more.

Shaikh: Personally, I imagine having this visual assistant; I sometimes think of it as a friend sitting on my shoulder, whispering in my ear. When I’m with friends and family, maybe I wouldn’t need such a thing. We are just together enjoying our company in the moment. But there are always those times when there aren’t other people around, so I really want that thing which does the equivalent of what my friends and family do when they tell me, oh, there’s someone walking toward you who you know. Oh, there’s a new shop just opened up on your left. Or be careful, there’s a cleaning trolley down the corridor.

These are things which are just what a friend or colleague would do without even thinking about it, and I really want the AI to be able to take on that role when there aren’t other people around.

Roberts: So recently, legislators have wanted to talk about the safety of AI. Could AI be dangerous? Could the computer take over for people? And they’ve created a large level of fear. Talk to me about this subject. What are the risks associated with AI? Do we need to be establishing curbs on the advancement of the technology?

Shaikh: I’m absolutely not an expert in this particular area, but many of my colleagues at Microsoft are working on our responsible AI standards, and I’m really glad that policymakers are thinking about this, because it is really important for society as a whole that we start thinking about what the potential harms are and put things in place so that we detect and prevent any problems before they happen. I also think, as someone with a disability, on a personal level, that we should make sure we can continue to leverage these technologies in the ways that can benefit people. So I’m very happy that people are looking into this. It’s not my area of expertise, but responsible AI, and making sure that we innovate in a way that does not cause those unintended consequences, is critical.

Roberts: Alice agrees. She thinks there is much more to gain from AI than to fear from it.

Massa: There is a lot of anxiety about AI. Everybody right now is thinking, oh my God, AI is going to take over the world. But the truth of the matter is, AI really has a lot to offer for people who need new strategies, who need other strategies, and the people who need those strategies are often fearful. I just hope people can get over that fear and recognize the value.

Roberts: As we plunge into this AI-driven era, the sheer magnitude of what generative AI can achieve becomes crystal clear. It’s not merely interpreting text or offering navigational support. It’s an ever-evolving force that adapts and grows with an expanding data universe. Picture a future where AI not only keeps pace with our needs but anticipates them, constantly pushing the boundaries of innovation. Prepare to be blown away by the endless possibilities.

Did this episode spark ideas for you? Let us know at podcasts@lighthouseguild.org and if you liked this episode, please subscribe, rate and review us on Apple Podcasts or wherever you get your podcasts.

I’m Doctor Cal Roberts. On Tech and Vision is produced by Lighthouse Guild. For more information, visit www.lighthouseguild.org. On Tech and Vision with Doctor Cal Roberts is produced at Lighthouse Guild by my colleagues Jane Schmidt and Anne Marie O’Hearn. My thanks to Podfly for their production support.

Blind Abilities – Wellness Wednesday: Taking a Look at Self-Care, What Does Self-Care Feel Like to You?

In this 28th episode of Wellness Wednesday, Beth Gustin, LPC, and Robin Ennis, LCSW, CPC, dive into self-care: taking care of yourself and recognizing some habits that you may not realize are self-care. Join Beth and Robin and get a look at how they define self-care.

Check out all the other Wellness Wednesday episodes at: https://blindabilities.com/?cat=532

Increasing the Effectiveness of Envision Smart Glasses for the Blind

In today’s world, technology continues to revolutionize the way we live, work, and interact with our surroundings. One groundbreaking innovation that has made a significant impact is Envision smart glasses for the blind. These glasses have the power to change the lives of visually impaired individuals by providing them with increased independence and accessibility. However, like any technology, there is always room for improvement. In this article, we will explore various methods to enhance the effectiveness of Envision smart glasses for the blind.

One crucial aspect to consider is the accuracy and speed of object recognition. Envision smart glasses utilize artificial intelligence and machine learning algorithms to identify and describe objects in the wearer’s environment. By continuously training and updating these algorithms, the glasses can become more adept at recognizing a wider range of objects with greater precision. This improvement would allow users to navigate their surroundings more confidently and efficiently.

Another area of focus is the voice feedback system. Currently, Envision smart glasses provide audio cues to describe objects, people, and text, but there is room for expansion in this area. By integrating natural language processing technology, the glasses could offer more detailed descriptions, including color, size, and texture. Moreover, it would be beneficial to have the ability to answer questions or engage in dialogue, allowing blind individuals to communicate more effectively with their surroundings.

Furthermore, enhancing the accessibility features of Envision smart glasses can significantly increase their effectiveness. Many visually impaired individuals rely on Braille as their primary means of reading and writing. By integrating a Braille display into the glasses, users could read messages or menus directly from the smart glasses. This addition would eliminate the need for carrying separate Braille devices and further streamline the user experience.

Additionally, incorporating a navigation system into the Envision smart glasses would be a game-changer for blind individuals. By utilizing GPS technology, these glasses could help users navigate unfamiliar areas confidently. The glasses could provide turn-by-turn directions, alert users to potential obstacles or hazards, and even identify nearby points of interest. This feature would greatly enhance mobility and improve the overall independence of visually impaired individuals.

Lastly, collaborating with the blind community and obtaining continuous feedback is vital in improving the effectiveness of Envision smart glasses. By actively seeking input and suggestions from users, developers can gain invaluable insights into the challenges faced by visually impaired individuals in their everyday lives. This feedback can drive further innovations and improvements, ensuring that the glasses meet the specific needs and preferences of their target audience.

Apple Documents Apple Vision Pro Accessibility Features: Including VoiceOver and Zoom Guides

Apple has published some information on setting up and using the accessibility features available on their new augmented reality Apple Vision Pro headset. This includes a guide specifically for VoiceOver users and a guide to setting up and using Zoom.

The VoiceOver guide covers how to turn VoiceOver on and off, use the various single and multi-hand gestures to control the headset, practice the gestures, and access tutorials. Some details of note include:

  • VoiceOver reads out descriptions of items in view, battery level, incoming calls, open apps, etc. You can adjust speech rate and pitch.
  • When you open a new app or change your view, VoiceOver plays a sound, then selects and speaks the name of the first item in that app or in your view (typically in the top-left corner).
  • Use single-hand pinches (tapping thumb and finger), modifier pinches (one hand pinch held while pinching with opposite hand), and slide pinches (long pinch plus swipe) to interact with items.
  • Practice gestures without affecting settings using the VoiceOver Practice feature.
  • Turn VoiceOver on or off via Siri voice command, triple clicking the Digital Crown, or in Accessibility settings.
  • Get guided tutorials on how to use VoiceOver when you first enable it or by tapping VoiceOver Tutorial in settings anytime.

The following options are available for Zoom users:

  • Keyboard Shortcuts: Control Zoom using shortcuts on a Magic Keyboard.
  • Zoom Controller: Turn the controller, change the zoom region, or set zoom level.
  • Zoom Region: Choose Full Screen Zoom or Window Zoom.
  • Use Crown to Zoom: Turn on or off.
  • Maximum Zoom Level: Drag the slider to adjust the level.

When using Zoom, to see more of your view, do any of the following:

  • Adjust the magnification: Use the Digital Crown.
  • Move the Zoom lens: (Window Zoom) Drag the window bar at the bottom of the Zoom lens.
  • Pan to another area: (Full Screen Zoom) Turn your head.
  • Tap the Zoom Controller (if you turned it on) to see the options for zoom level and more.

Given Apple’s strong track record on accessibility, it should be no surprise that they have implemented what sounds like a solid set of features on the Vision Pro from day one for blind and low vision users. However, the real test will come from hands-on usage.

If you have purchased a Vision Pro or taken advantage of the in-store demos now available, please share your experiences using the headset with VoiceOver or Zoom. We would love to hear your opinions. Did it live up to your expectations? What impressed or disappointed you? Your insights will help guide others considering purchasing the Apple Vision Pro.


What can you connect to the iPhone 15 with USB-C?

On the iPhone 15 and iPhone 15 Plus, Apple finally got rid of the Lightning
port and replaced it with USB-C. This opens up a whole new world of
convenience of connecting accessories and peripherals. The iPhone 15 Pro
and iPhone 15 Pro Max go one step further, as the USB-C port on the pro
models supports USB-3 speeds, enabling up to 10 Gbps data transfer speeds.

USB-C makes it easy to connect displays, external storage, and cameras, and to charge using the same cable as all your other devices. Here’s what you can do with your new iPhone and its shiny new port…

Why did Apple change from Lightning?
Apple was fully invested in the Lightning port ecosystem on the iPhone. However, it began adopting USB-C across Mac and iPad models in 2015 as a way to modernize its connectivity. USB-C is versatile, carrying power and data over the same connector. And USB4 / Thunderbolt offers even higher data transfer speeds.

At the same time, USB-C was becoming increasingly popular on the Android side of the smartphone market. But Apple stuck with Lightning on the iPhone, partly because it was still stung by the backlash from the 30-pin to Lightning transition that happened more than a decade ago.

The forcing factor for Apple was the EU Commission, which has passed legislation to reduce e-waste by enforcing USB-C as the universal connector for phones. That would mean Apple would at least have to make a special model of iPhone for the European market, which brings its own complications. Instead, Apple chose to just adopt USB-C across the board. And apart from the frustration of customers having to change out all their existing, now-defunct Lightning cables, that brings a lot of benefits…

Universal USB-C Charging
Now that you have an iPhone 15, you are probably close to being able to use
the same single cable to charge all of your devices. Almost all iPads have
a USB-C port, and all Macs have been sporting USB-C for a decade. The
newest MacBooks come with Apple’s MagSafe charger, but they can also be charged by plugging in to one of their USB-C ports. The latest-generation Apple
TV comes with a USB-C remote, and Apple just released a new AirPods Pro
with a USB-C charging case. You can also now buy just the USB-C case
separately from the Apple Store.

Apple sells its own USB-C charge cables, and the iPhone comes with a nice
quality woven cable in the box itself. Unlike Lightning, USB-C is not
proprietary. To charge your Apple gear, you can use any USB-C charging
cable or accessory on the market. That includes cheaper USB-C cables, and
USB-C cables of different styles and lengths. Anker has a very convenient
dual 47 watt USB-C power adapter, for example.

To charge your iPhone at full speed, you want to use a 20-watt or 30-watt
power adapter. You can safely use a higher wattage power adapter like the
one that comes with your MacBook; the phone will automatically manage the
power.

Charge other devices with your iPhone 15 battery
Another cool angle of the USB-C charging story is that you can now use the iPhone to charge other accessories. The iPhone 15 can accept up to 27 watts through the USB-C port, and it can also output about 4.5 watts. (Lightning could only manage a measly 0.3 watts.) This means you can actually use your spare iPhone battery to power and charge connected accessories. If your AirPods are running low, in a pinch, you can plug them into your phone and your phone will top up the juice in your AirPods case. You could even top up the battery of another person’s iPhone, but just be aware it will take a while.

Easily connect to external displays
With Lightning, you could only connect to external displays via a clunky
dongle. With USB-C on iPhone 15, you can output to a screen directly. Use a
USB-C to HDMI cable and show your iPhone screen on a 4K TV or monitor, with
screen mirroring. If you are watching a movie in a video streaming service
app, the film will automatically fill the TV dimensions with playback
controls on the touch screen. Easily show off your photo library or conduct
a presentation with Keynote by just plugging your phone directly into a
projector.

One of the use cases Apple is pushing is positioning iPhone as a portable
games console. Using the power of the A17 Pro chip inside, iPhone 15 Pro
can run some top-end games with high graphical fidelity. Pair an Xbox or
PlayStation game controller, connect your phone to a TV, and start gaming.
The Apple USB-C Digital AV adapter is a good choice for this use case, as
it allows for HDMI, power output for continuous charging and passthrough
USB-C in one convenient dongle.

Expand your local storage with portable USB-C hard drives and SD cards
Connect a USB-C hard drive or SD card reader and browse files directly
using the Files app on iPhone. If you go on holiday and take a dedicated
camera with you, you can import photos directly from the SD card into your
iPhone’s photo library. Then, empty the SD card and keep shooting while you edit and upload the pictures you’ve taken on your phone’s big touchscreen.

The iPhone 15 Pro Camera app allows you to record ProRes 4K video at 60 FPS directly to USB-3 storage. ProRes is only lightly compressed and the file sizes are huge, consuming gigabytes per minute of recording. But by pairing your iPhone with a 1 TB portable SSD, you can record for a while before running out of space. This requires a USB-3 cable (not simply a charge cable) to take advantage of the fast 10 Gbps transfer speeds possible through the iPhone 15 Pro’s port.
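How long is “a while”? As a rough back-of-the-envelope sketch, assuming a data rate on the order of 10 GB per minute for 4K60 ProRes (the real figure varies with the ProRes flavor and the footage itself):

    import Foundation

    // Back-of-the-envelope estimate of ProRes recording time on an external
    // drive. The ~10 GB/min rate is an assumption for 4K60 ProRes; actual
    // bitrates depend on the ProRes flavor and the content being filmed.
    let driveCapacityGB = 1_000.0      // 1 TB portable SSD
    let proResRateGBPerMin = 10.0      // assumed data rate

    let minutes = driveCapacityGB / proResRateGBPerMin
    print(String(format: "Roughly %.0f minutes (about %.1f hours) of recording", minutes, minutes / 60))
    // Prints: Roughly 100 minutes (about 1.7 hours) of recording

In other words, under that assumption a 1 TB drive buys you on the order of an hour and a half of continuous 4K60 ProRes recording.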

Connect hardware keyboards and wired internet
Sometimes, wireless doesn’t cut it and you just need to plug in. USB-C on iPhone supports almost any basic accessory you can think of, without the need for special drivers or additional configuration. It’s all plug and play. That means a hardware keyboard you connect will just work.
Or if you are podcasting on the road, you can record using a mini USB-C
microphone. Another possibility is wired internet, using a USB-C to
Ethernet adapter. And if you have a MIDI keyboard, you can make a song on
the go with an app like GarageBand.

Wired headphones that work with your phone and computer
Since Apple dropped the headphone jack, much of the world has moved on to
using Bluetooth wireless headphones like AirPods. But if you still prefer
to plug in with a wire, using wired headphones with iPhone has been an
inconvenient affair. Lightning earbuds worked with your phone, but nothing
else.
Now, it’s all much simpler. iPhone 15 works with any USB-C headphones, including Apple’s new USB-C EarPods. These headphones will work with any
computer with a USB-C port, so you can finally use the same wired
headphones across your phone, iPad and Mac. An increasing number of
high-end headphones and speakers also rely on USB-C connectivity these days.

Connect multi-port hubs
USB-C is very flexible and allows for data and power to travel over the
same port. This means you can carry a single dongle, which is a multi-port
hub comprising all sorts of IO. These are super convenient as iPad and Mac
accessories, and now work with the iPhone too.
For example, this $25 Anker 5-in-1 hub will work with the USB-3 USB-C port on iPhone 15 Pro and enable you to connect an external display, connect to power, and plug in USB accessories all at the same time.
The only thing you have to keep in mind is that the iPhone can only output power to the hub at 4.5 watts. iPad and Mac can offer much higher bus-powered wattage, so if you are using a hub like this with the iPhone, you may need to also plug in an external power source.
Some of these capabilities were previously possible with Lightning, but they necessitated large, clunky, special-purpose adapters for each use case. USB-C is standards-based, which means anything you buy will also work with other devices. You don’t need to carry a special Lightning SD card reader anymore; the same USB-C peripheral will work just as well with your iPhone, iPad, Mac, and any Windows PC or Android device in the family. The universal connector life is finally here.

Original Article at:
https://9to5mac.com/2023/12/17/iphone-15-usb-c-connect-accessories/

Ringing in the Ears, Tinnitus and Other Hearing Conditions You Should Know About – CNET

Picture this: you’re sitting at your desk at the office and all of a sudden you lose hearing in one of your ears for a few seconds before everything returns to normal. Or maybe you hear a ringing in your ear on and off for a few days, and then it disappears.

When most people think of ear issues, they think of ear infections, hearing loss and deafness. However, despite being one of the smallest structures in your body, ears are incredibly complex and subject to many complications. 

So, is that ringing in your ear normal or a sign of something more serious? Here are five ear problems you may not know about, and what to do if you suspect you have an ear disorder.  


You temporarily lose hearing in one or both of your ears

What it is: Acute noise-induced hearing loss or obstructive hearing loss.

If you’ve ever been to a loud concert, you may have experienced noise-induced hearing loss before. Acute noise-induced hearing loss occurs when you temporarily lose your hearing in response to exposure to loud noise. It can make environmental sounds seem muffled or make conversations sound stifled. Though temporary on its own, acute noise-induced hearing loss can, if experienced repeatedly, lead to permanent hearing loss in the long run.

Obstructive, or conductive, hearing loss occurs when something physically blocks sound from reaching your hearing structures. This can happen when you have too much ear wax built up in your ears, if you have a foreign object lodged in your ear, or if you have some sort of injury to your inner, middle or outer ear. If you have this type of hearing loss, you may also experience pain or a feeling of fullness in your ear.

Apple presentation showing the noise detection feature on the Apple Watch.
Exposure to loud sounds can temporarily reduce your ability to hear. New technology, like the Noise app on Apple Watch, can warn you when ambient sound reaches a decibel level that could harm your hearing. (Image credit: Apple)

You hear ringing in your ears that comes and goes

What it is: Tinnitus.

Tinnitus refers to the perception of noise or ringing in your ears. Perception is the key word, because you aren’t actually hearing a real sound when you experience tinnitus. Often a symptom of other ear disorders, tinnitus is not a condition on its own. It can signify age-related hearing loss, an ear injury or infection, a circulatory system disorder or something else. 

You may hear phantom noises other than ringing, including buzzing, clicking, roaring, humming or hissing. The volume level of the phantom noises may vary or stay constant, and the noises may come and go completely. 

You’re profoundly annoyed hearing your coworker eating a snack 

What it is: Misophonia.

This disorder involves an emotional response to sounds that don’t truly disturb most people, such as chewing, breathing and tapping. Everyone occasionally gets annoyed by repetitive sounds, but people with misophonia experience an upsetting emotional reaction that often includes rage and resentment. They may think that others are intentionally making sounds to upset them.

People with misophonia may act out in response to a noise that someone else is making and realize later that their reaction was extreme or inappropriate. If you often experience intense feelings — remember, more than just slight annoyance —  in response to various sounds, you may want to talk to a doctor. 

A woman with brown curly hair eating a pink apple outside.
If the sound of someone eating something crunchy, like an apple, distresses you to the point of anger or resentment, you may need to talk to a doctor about misophonia. (Image credit: Getty Images)

You hear ringing in your ears and experience vertigo 

What it is: Ménière’s disease.

This disorder of the inner ear is characterized by tinnitus and bouts of vertigo (dizziness), and it can contribute to progressive hearing loss. Ménière’s disease is considered a chronic condition, and doctors don’t yet know the exact cause. However, fluid buildup and problems with fluid drainage seem to be a contributing factor. In most cases, Ménière’s disease only affects one ear and can cause a feeling of fullness in the affected ear, a symptom referred to as aural fullness. 

You can hear your own pulse, and it’s incessant

What it is: Rhythmic tinnitus.

Also called pulsatile tinnitus, this is a rare form of tinnitus. Rhythmic tinnitus, in contrast to regular tinnitus, occurs in response to a physical sound: that of your blood circulating through your arteries. Doctors sometimes call this condition “objective tinnitus” because they can hear the sound, whereas regular tinnitus is subjective because only the patient can hear it.

If you have rhythmic tinnitus, you might notice that the pitch of the sound correlates to your pulse. You may also feel like you can never escape the sound, especially when lying down or pressing your ear to something.

What should I do if I think I have an ear disorder? 

If you suspect something is wrong with your ears or hearing, skip the Google-fest and see a doctor as soon as you can. Hearing disorders, especially hearing loss, can creep up slowly, showing no symptoms until the disorder has progressed significantly.

In most cases, you’ll need to see an otolaryngologist or audiologist for ear and hearing disorders. If you go to your primary care doctor, you’ll likely get referred out to one of those ear specialists. If you have an ear disorder that involves or has induced sensorineural hearing loss, ask your doctor about hearing aids, assistive listening devices or cochlear implants.