By Brian Civin
The last five years have witnessed impressive strides in Artificial Intelligence (AI), with ChatGPT’s mainstream success acting as a pivotal moment. Generative AI has flooded the internet with its creations, encompassing images, videos, songs, and articles. Everyday software and platforms, such as Netflix, Apple, Google Maps, and Uber, employ algorithms that shape the media we consume, the products we purchase, and even our daily commute routes.
Beyond the limelight, companies are employing AI for diverse applications like fraud detection, code generation, and personalized marketing. The tech and automotive industries are well ahead in testing autonomous vehicles utilizing machine learning and sophisticated algorithms for safe navigation. And this is just the beginning.
As AI matures, it is increasingly replacing human decision-making in various use cases. For instance, a Snapchat influencer named Caryn Marjorie has created an AI voice bot version of herself to interact with her followers in real time. Additionally, a Chinese tech company, NetDragon WebSoft, has appointed an AI bot named Tang Yu as its CEO.
The Overlooked Risks of Accelerating AI Adoption
However, amidst the excitement, there’s a danger that companies might overlook the risks of AI adoption. Despite AI’s ability to support decision-making, it cannot fully replace human agency and judgment. AI relies on algorithms and data provided by humans, making it imperfect due to challenges like incomplete or biased data.
To shed light on these risks, let’s examine some issues:
Unvalidated data sources: Many generative AI tools rely on the public internet for data, but companies may struggle to verify the quality and accuracy of this information.
Manipulated data: Much like search engines, AI systems may weight data by popularity, a signal that bad actors can deliberately game to skew results.
Breach of copyright law and privacy regulations: AI-generated content often comprises composites of existing information, raising concerns about copyright infringement and privacy violations.
Potential loss of control over data: Companies might inadvertently surrender proprietary data to commercial AI tools, leading to the exposure of valuable information to competitors.
Undifferentiated business outcomes: If every business uses similar AI systems and datasets, they risk generating similar products and services, reducing innovation.
Trust Your Own Data, People and Partners
To address these risks, companies must not relinquish critical thinking and data governance to automation. They should carefully assess the quality and accuracy of data used for AI decision-making, considering legal, business, and technical perspectives. Clear standards for data sources, trust requirements, data control, and usage should be established, along with a focus on avoiding biased or incomplete datasets.
Organizations can either rely on their in-house data or partner with trusted entities to access accurate and reliable data for AI algorithms. Operating in a closed environment, with data access limited to trusted partners, can further mitigate risks. Policies must be in place to guide employee AI usage and to ensure that AI decisions can be explained and validated.
In conclusion, while AI has achieved remarkable progress, companies must remain cognizant of its potential risks. Trusting in-house data, people, and partners is crucial, and establishing clear standards, data governance, and policies is essential to navigate the challenges responsibly. Striking the right balance between AI capabilities and human judgment will be the key to unlocking business value.
The author is Chief Sales and Marketing Officer at AfriGIS