

To me it seems like a thing that sounds kinda cool on paper but isn't actually that useful in practice. We already have the ability to do real-time translations or point the camera at something to get more information via AI with our smartphones, but who actually uses that on the regular? It's just not useful or accurate enough in its current state, and having it always available as a HUD isn't going to change that imo. Being able to point a camera at something and have AI tell me "that's a red bicycle" is a cool novelty the first few times, but I already knew that just by looking at it. And if I'm trying to communicate with someone in a foreign language using my phone to translate for me, I'll just feel like a dork.
And the answer you get will probably be wrong, or at least wrong often enough that you can't trust it without looking it up yourself. And even if these things do get good enough, people still won't be using them frequently enough to want to wear a device on their face for it, when they can already do it better on their phone.