When Apple unveiled a new artificial intelligence feature called Apple Intelligence at the Worldwide Developers Conference (WWDC) 2024, Elon Musk expressed security concerns about it.
Musk wrote a post on X, warning, “If Apple integrates OpenAI at the operating system level, my companies will ban Apple devices,” calling it an “unacceptable security violation.”
Apple’s AI features run on-device and route requests to a private cloud only when necessary.
On-device processing is generally considered lower risk because the information never leaves the device.
Cloud-based processing, on the other hand, raises security concerns because the information is transmitted to an external server.
Musk’s concerns stem from the fact that OpenAI is developing a cloud-based AI and is collaborating with Apple.
He argued, “If Apple hands over data to OpenAI, we won’t know what’s happening.” The IT industry has responded cautiously to these claims.
It is still unclear exactly how Apple’s AI features operate, and many consider it too early to criticize.
Apple has stated that it will allow ChatGPT to operate with user consent, which is seen as part of a policy focusing on privacy protection.
Experts believe consumers should wait until clear information about Apple’s AI integration method becomes available.
Future beta versions will reveal whether Musk’s concerns are valid and how Apple handles its users’ personal information.