This key part of Zuckerberg's plan to create a virtual world could analyze your voice, eyes, and body language. He announced the idea 11 days ago.
Meta is building its own bold voice assistant for AR/VR
Voice assistants are becoming more conversational. They can answer questions about the weather and other simple topics. But ask a follow-up like "is it warmer than it was last week?" and they'll likely get confused.
Meta wants its voice assistant to understand what you're saying and give you the right answer. To do this, it needs more than your words alone: it needs context. For example, if you ask it how many people are in a room, it should know whether you're asking about the people or the chairs.
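The people-versus-chairs ambiguity can be made concrete with a toy sketch: a spoken question is resolved against what an egocentric camera currently sees. Every name here is hypothetical and illustrative; this is not Meta's actual pipeline or API, just a minimal illustration of grounding a question in visual context.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object the glasses' camera (hypothetically) detected in the scene."""
    label: str        # e.g. "person", "chair"
    confidence: float

def count_objects(detections, label, min_conf=0.5):
    """Count detected objects of a given type above a confidence threshold."""
    return sum(1 for d in detections if d.label == label and d.confidence >= min_conf)

def answer(question, detections):
    """Very naive intent resolution: map keywords in the question to an object label."""
    q = question.lower()
    if "people" in q or "person" in q:
        return count_objects(detections, "person")
    if "chair" in q:
        return count_objects(detections, "chair")
    return None  # no visual grounding found; fall back to a generic assistant

# A made-up scene: three person detections (one too uncertain to count) and a chair.
scene = [Detection("person", 0.9), Detection("person", 0.8),
         Detection("chair", 0.95), Detection("person", 0.3)]
print(answer("How many people are in the room?", scene))  # 2
```

The point of the sketch is that the answer depends on fused modalities: the same words yield different answers in different rooms, which is exactly the contextual understanding a purely text-based assistant lacks.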
With glasses on your face, you’ll be able to see the world from your perspective. You’ll be able to hear what you hear and more. Your AI will be able to see the same things you do. And it’ll learn how to do this by itself.
A bot that shows you whatever you ask it to show you.
Meta is still working out how to build an AI that understands human language. It's also trying to figure out how to understand people by watching how they move or by talking with them.
AI in the metaverse will present ethical challenges
Privacy concerns and failures haunt Meta and other big tech companies because their business models are built around collecting user data: our shopping histories, interests, private communications, and more. These problems are even greater with VR, privacy experts say, because it can capture far more sensitive data, like our eye movements, facial expressions, and body language.
An AI can quantify eye-tracking data, gaze patterns, and even signs of arousal, and that data can then be used to predict future behavior. Who has access to this data, and what are they doing with it? These questions remain unanswered, though Meta says it is committed to addressing them.
Zuckerberg said the company is working with human rights, civil liberties, and privacy experts to create "systems grounded by fairness, respect, and humanity." But given the company's track record of privacy breaches, some technology ethicists are doubtful. "From a solely scientific standpoint, I am really excited. But because this is Meta, I am afraid," said Pearlman.
Privacy concerns are valid. We should give people control over their data. But we shouldn't let companies use AI to discriminate against or harass people.