Shocking Gap Revealed: Just 2% of Companies Live Up to Responsible AI Standards
- Hanaa Siddiqi
- Aug 20
- 3 min read

Most companies are falling short of Responsible Artificial Intelligence standards, which are meant to ensure that AI systems are developed and deployed in an ethical, safe, and transparent manner. A new study from the Infosys Knowledge Institute reveals the scale of the issue. The research is based on a survey of 1,500 respondents representing businesses with annual revenues above one billion dollars, spread across seven countries and 14 industries.
The findings are concerning. Ninety per cent of companies have already experienced some form of AI-related incident, whether environmental, safety-related, or ethical. Even more troubling is the lack of action: only 38 per cent of businesses are actively working to reduce the environmental footprint of their AI systems.
The report warns that ignoring Responsible AI standards within broader environmental, social, and governance strategies creates a blind spot that carries both financial and reputational risk. Sustainability, the authors argue, is no longer just about cutting carbon emissions. It now extends into computation. As AI models become larger, their energy requirements rise rapidly. Yet fewer than four in ten businesses are measuring emissions, investing in energy-efficient deployment, or shifting toward sustainable cloud infrastructure.
The governance challenge is just as urgent. Three out of four companies reported financial losses resulting from poorly implemented AI, and 86 per cent of executives anticipate new compliance challenges as advanced systems become more widespread. By contrast, firms that have invested in Responsible AI leadership are already seeing the upside: their incident costs are 39 per cent lower, and they deal with far fewer severe risks. For these businesses, Responsible AI has become a source of resilience and growth rather than just another compliance box to tick.
Europe has taken the lead with the AI Act, the first comprehensive legal framework for artificial intelligence anywhere in the world. The European Commission describes the law as a safeguard to ensure AI remains trustworthy, human-centric, and aligned with values such as safety, democracy, and fundamental rights.
Implementation began in August 2024. The first enforcement deadline followed in February 2025, banning prohibited practices such as scraping facial images from the internet or CCTV feeds. As of 2 August 2025, the law extends to general-purpose AI models considered to pose systemic risk, including the flagship systems developed by OpenAI, Google, Meta, and Anthropic. New entrants must comply immediately, while existing providers have until August 2027 to fully meet the requirements.
The penalties are steep. Companies that breach bans on prohibited AI uses can face fines of up to €35 million or 7% of their global annual turnover, whichever is higher. Violations related to general-purpose models can result in penalties of up to €15 million or 3% of the company's annual turnover. Google has announced its support for the EU’s voluntary code of practice for general-purpose AI. Even so, critics argue that strict rules could slow Europe’s competitiveness in AI innovation.
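To make these ceilings concrete, here is a minimal illustrative sketch of how the "whichever is higher" rule scales with company size. The function name, violation labels, and the €2 billion example turnover are assumptions for demonstration only, not figures from the study or the legal text.

```python
# Illustrative only: estimates the maximum fine ceiling under the AI Act's
# "whichever is higher" rule, using the figures cited in this article.
# Actual exposure depends on the specific infringement and regulator discretion.

def max_fine_ceiling(annual_turnover_eur: float, violation: str) -> float:
    """Return the upper bound of the fine for a given violation category."""
    caps = {
        # Prohibited AI practices: up to €35 million or 7% of global turnover
        "prohibited_practice": (35_000_000, 0.07),
        # General-purpose AI model obligations: up to €15 million or 3% of turnover
        "gpai_obligation": (15_000_000, 0.03),
    }
    fixed_cap, turnover_share = caps[violation]
    # Assumes the higher of the two amounts applies in both categories
    return max(fixed_cap, turnover_share * annual_turnover_eur)

if __name__ == "__main__":
    turnover = 2_000_000_000  # hypothetical €2 billion in global annual turnover
    print(f"Prohibited practice ceiling: €{max_fine_ceiling(turnover, 'prohibited_practice'):,.0f}")
    print(f"GPAI obligation ceiling:     €{max_fine_ceiling(turnover, 'gpai_obligation'):,.0f}")
```

For a business of that size, the percentage-based cap, not the fixed amount, sets the ceiling in both categories.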
Some experts believe the Act still falls short of its goals. Their primary concern is that environmental responsibility is not given enough weight. Training and operating large AI models require vast amounts of energy, yet sustainability provisions remain secondary in the legislation.
New data from BloombergNEF backs this concern. The research suggests that surging AI demand will cause global data centre power requirements to double by 2050, accounting for nearly nine per cent of global energy use.