Ethical Challenges and Considerations in AI for Indian Organisations

Artificial intelligence (AI) is transforming industries globally, with India poised to lead in innovation. However, ethical challenges such as "Dark AI" and data biases must be addressed.

Artificial intelligence (AI) is growing and evolving at remarkable speed. Genuinely life-changing use cases are being rolled out at pace, with organisations worldwide leveraging AI to enhance customer service, improve efficiency, and free employees from mundane tasks so they can focus on creative, strategic work.

India, for one, is witnessing an AI-driven economic inflection point. Google CEO Sundar Pichai recently lauded India as "well-positioned" to shape the future of AI, noting that the country could lead AI innovation efforts. However, despite the excitement around AI, particularly generative AI, ethical concerns continue to loom.

With GenAI set to be a heavily adopted technology, businesses must address its tendency to "hallucinate", or produce inaccurate or misleading information. At its core, AI is only as reliable as the data it is trained on. If that data is biased, inaccurate, or discriminatory, the outputs will likewise be biased, inaccurate, or discriminatory.

Dark AI: A Threat to Organisations' Credibility

Dark AI is what emerges when intelligent systems go unregulated, much as the dark web emerged from the unregulated corners of the internet. In other words, Dark AI refers to the potential misuse of AI systems, such as large language models (LLMs), for malicious purposes. Like the dark web, unregulated and open-source LLMs could be exploited for activities ranging from financial fraud and organised crime to bioterrorism.

Highly persuasive phishing emails, deepfake videos, automated illicit financial transactions, and propaganda-laden media are just a few examples of the output generated by Dark AI. Nevertheless, amid these challenges lie opportunities to use AI to counter such threats and bolster cybersecurity.

With India set to embrace GenAI to enhance operational efficiencies, the regulation and consolidation of AI take centre stage. Organisations, regardless of their size, must be nimble enough to navigate the governance, risk, and compliance landscape successfully.

India's AI Strategy

India’s national strategy on artificial intelligence aims to strike a balance between fostering innovation and mitigating risks. At the Global Technology Summit (GTS) 2023, India’s ministerial representative emphasized the need for policy enablers and guardrails. The Digital Personal Data Protection Act, 2023, which governs the processing of digital personal data, addresses some privacy concerns related to AI platforms.

According to a Nasscom survey, around 60% of Indian companies are either following mature practices related to responsible artificial intelligence (RAI) or have initiated steps towards RAI adoption. The Indian government’s active investment in AI, including a recent sanction of INR 103 billion for AI projects, underscores the importance of developing and using AI responsibly. 

That said, there isn’t specific legislation for Dark AI, although there is talk that this might change. The onus, therefore, is also on organisations to ensure they are equipped to mitigate the risks and are prepared when new laws eventually take effect.

Coming Out on Top of India's AI Market

This is a no-brainer, but it needs to be said: organisations must adhere to ethical principles such as fairness, accountability, transparency, privacy, and safety. Mitigating challenges like bias, lack of transparency, and privacy breaches isn't easy, but it must be driven by conscious effort. Organisations should also build a thorough understanding of AI terminology and risks through comprehensive training, while weighing the security and external impacts of their AI deployments.

Retrieval augmented generation (RAG) incorporates external knowledge sources and proprietary data into the generative process, producing more reliable and credible outputs. This method addresses some of the limitations of traditional AI models, resulting in more accurate and relevant responses. Additionally, organisations should consider using advanced data tools to ensure their AI system datasets are robust and unbiased.
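To make the RAG pattern more concrete, the sketch below shows the basic flow in broad strokes: retrieve the most relevant proprietary documents for a query, then ground the model's prompt in that context. The toy document store, the keyword-overlap retriever, and the call_llm() placeholder are illustrative assumptions rather than any specific product's API; a production system would use vector embeddings, a vector database, and a real model endpoint.

```python
from typing import List

# Toy in-memory "knowledge base" of proprietary documents (illustrative only).
DOCUMENTS: List[str] = [
    "Refund requests are processed within seven business days.",
    "Premium support is available round the clock for enterprise customers.",
    "All customer data is stored in data centres located in India.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by naive keyword overlap with the query and return the top k.
    Real RAG systems use embeddings and a vector store instead of word overlap."""
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder standing in for a call to any generative model (assumption)."""
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    # Ground the prompt in retrieved context instead of relying solely on
    # whatever the model memorised during training.
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("Where is customer data stored?"))
```

The key design point is that the generative step only sees curated, retrieved context, which is what makes the outputs more accurate and auditable than answers drawn from the model's training data alone.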

The bottom line is, as Indian organisations continue to adopt AI technologies, they must remain vigilant about the ethical challenges and considerations. By adhering to responsible AI practices and leveraging advanced technologies like RAG, they can move the dial in favour of ethical AI.

Authored By: Mukundha Madhavan, APAC Tech Lead, DataStax