How Ernst & Young’s AI platform is ‘radically’ reshaping operations – Computerworld

Many companies have invested resources in cleaning up their unstructured and structured data lakes so that data can be used for generating AI responses. Why then do you see fewer and not more investments in data scientists? “Companies are prioritizing AI tools that can automate much of the data preparation and curation process. The role of the data scientist, over time, will evolve into one that’s more about overseeing these automated processes and ensuring the integrity of the knowledge being generated from the data, rather than manually analyzing or cleaning it. This shift also highlights the growing importance of knowledge engineering over traditional data science roles.
 
“The focus is shifting from manual data analysis to systems that can automatically clean, manage, and analyze data at scale. As AI takes on more of these tasks, the need for traditional data science roles diminishes. Instead, the emphasis is on data architecture and knowledge engineering: understanding how to structure, govern, and utilize knowledge in ways that enhance AI’s performance and inform AI agent developers.”

What do you see as the top AI roles emerging as the technology continues to be adopted? “We’re seeing a new wave of AI roles emerging, with a strong focus on governance, ethics, and strategic alignment. Chief AI Officers, AI governance leads, knowledge engineers and AI agent developers are becoming critical to ensuring that AI systems are trustworthy, transparent, and aligned with both business goals and human needs.

“Additionally, roles like AI ethicists and compliance experts are on the rise, especially as governments begin to regulate AI more strictly. These roles go beyond technical skills; they require a deep understanding of policy, ethics, and organizational strategy. As AI adoption grows, so too will the need for individuals who can bridge the gap between the technology and the focus on human-centered outcomes.”

How will artificial general intelligence (AGI) transform the enterprise long term? “AGI will revolutionize the enterprise in ways we can barely imagine today. Unlike current AI, which is designed for specific tasks, AGI will be capable of performing any intellectual task a human can, which will fundamentally change how businesses operate. AGI has the potential to be a strategic partner in decision-making, innovation, and even customer engagement, shifting the focus from task automation to true collaboration between humans and machines. The long-term impact will be profound, but it’s crucial that AGI is developed and governed responsibly, with strong ethical frameworks in place to ensure it serves the broader good.”

Many believe AGI is the more frightening AI evolution. Do you believe AGI has a place in the enterprise, and can it be trusted or controlled? “I understand the concerns around AGI, but with the right safety controls, I believe it has enormous potential to bring positive change if it’s developed responsibly. AGI will certainly have a place in the enterprise. It will fundamentally transform the way companies achieve outcomes. This technology is driven by goals, outcomes — not by processes. It will disrupt the pillar of process in the enterprise, which will be a game changer.

“For that reason, trust and control will be key. Transparency, accountability, and rigorous governance will be essential in ensuring AGI systems are safe, ethical, and aligned with human values. At EY, we strongly advocate for a human-centered approach to AI, and this will be even more critical with AGI. We need to ensure that it’s not just about the technology, but about how that technology serves the real interests of society, businesses, and individuals alike.”

How do you go about ensuring “a human is at the center” of any AI implementation, especially when you may some day be dealing with AGI? “Keeping humans at the center, especially as we approach AGI, is not just a guiding principle — it’s an absolute necessity. The EU AI Act is the most developed effort yet in establishing the guardrails to control the potential impacts of this technology at scale. At EY, we are rapidly adapting our corporate policies and ethical frameworks in order to, first, be compliant, but also to lead the way in showing the path of responsible AI to our clients.