Tuesday, March 19, 2024

Trusting AI is Impossible without Trustworthy People

The question of what counts as “true” artificial intelligence (AI) has become a dangerous proposition, with companies like OpenAI, Google’s DeepMind, and Facebook Artificial Intelligence Research (FAIR) all working toward artificial general intelligence.

Artificial general intelligence, colloquially referred to as superintelligence, can be defined as AI that improves itself through iterative learning until it reaches the point of singularity and rapidly surpasses the limits of human intelligence. Regardless of where any one company stands, AI is a dangerous technology, and some would go as far as calling it a weapon. To give a few examples, deepfakes use AI to perform realistic face swaps, creating startlingly convincing illusions. Deepfakes can also produce fake audio that impersonates real people, and when combined with fake video, the potential dangers become enormous.

Beyond video and audio, AI can also generate highly plausible fake text, as in the case of a text-generation model that OpenAI initially declined to release. Generative adversarial networks, meanwhile, are already being used to create visual art and even poetry.

Using AI for morally corrupt purposes takes a deliberate, focused effort, because AI is a complex tool that does not act on its own. Creating and deploying any AI at scale requires data engineers, data analysts, and data scientists, who build the environments and infrastructure needed to develop, test, train, and deploy AI models. For firms, this raises the risk: if untrustworthy employees gain access to the AI, the chances of fraud multiply.

There is always a risk that data scientists will turn their skills to building deepfake models. The potential dangers are the same whether they come from the creators of an AI application or from the employees who handle it. Both the benefits and the risks of AI would be drastically amplified by artificial general intelligence, which is why it is so necessary for firms to vet the ethics of those who create and use AI.

To trust AI, firms must first be able to trust their data scientists. The field is now supported by expert organizations that connect industry problem statements to thousands of trustworthy data scientists seeking meaningful work, experience, and networking opportunities. Both Microsoft and Google, for instance, have launched AI-for-good initiatives aimed at the security of industry and society as a whole.

It may be impossible to stop anonymous developers from creating “AI-for-bad” projects, but firms can certainly focus on creating more AI-for-good projects. That means closer stakeholder collaboration, requiring companies not only to implement AI to solve problem statements but also to verify the backgrounds of the data scientists who work on those problem statements.

Ultimately, firms should not place blind trust in either AI or data scientists. Environments that facilitate trust, such as open ecosystems for creating AI solutions, need to become a cross-industry standard.

Debjani Chaudhury
Debjani Chaudhury works as an Associate Editor with OnDot Media. In this capacity, she contributes editorial articles for two platforms, focusing on the latest global technology and trends. Debjani is a seasoned content developer with three years of experience in the fashion, IT, and international marketing industries. She has represented India in international trade forums such as Hannover Messe, Germany.