Experts advocate thoughtful regulation for the rapid rise of Generative AI

E27 collated thoughts from Southeast Asia’s AI leaders on Generative AI’s rise, emphasising responsible innovation, ethical governance, and the urgent need for thoughtful regulation. The following are excerpts from Angsana Council members Peng T. Ong and Gullnaz Baig. The full compendium of interviews can be accessed here.

Excerpt from Peng T. Ong, Angsana Council Trustee and Managing Partner, Monk’s Hill Ventures

The emergence of Generative AI has broken through a significant barrier: it can create valuable and diverse content at relatively low cost. While it is currently unsuitable for many mission-critical applications, that point may be closer than we think.

I look at it this way: the ‘dog’ is ‘talking’. But just because the dog is talking doesn’t mean we should put it behind the wheel of a truck or at the trading desk of a billion-dollar hedge fund. We don’t know what the dog is ‘thinking’. A straitjacket must be put around these AI networks before we let them touch anything close to mission-critical.

My concern is that the folks behind this AI boom aren’t thinking carefully enough about fundamental real-world engineering requirements before connecting these systems to the physical world.

One possibility is to implement these straitjackets as computationally tractable algorithms: software whose behaviour we can predict deterministically. Rule-based expert systems will come back into vogue, or knowledge graphs (structured data representing knowledge) will become more pervasive.
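The deterministic “straitjacket” Ong describes can be sketched as a rule-based guard that sits between a generative model and any mission-critical action. This is a minimal illustration only; all names, rules, and thresholds here are hypothetical, not from the interview.

```python
# A hypothetical rule-based guard: deterministic, auditable checks that
# a model-proposed action must pass before it touches anything critical.
# Unlike the model itself, every rule's behaviour is fully predictable.

def rule_based_guard(proposed_trade: dict) -> bool:
    """Deterministically approve or reject a model-proposed trade."""
    rules = [
        lambda t: t.get("notional", 0) <= 1_000_000,    # hard position cap
        lambda t: t.get("symbol") in {"AAPL", "MSFT"},  # instrument whitelist
        lambda t: t.get("side") in {"buy", "sell"},     # well-formed action only
    ]
    return all(rule(proposed_trade) for rule in rules)

# The model's suggestion is executed only if every rule passes.
safe = rule_based_guard({"symbol": "AAPL", "side": "buy", "notional": 50_000})
risky = rule_based_guard({"symbol": "AAPL", "side": "buy", "notional": 5_000_000})
```

Because each rule is explicit code, the guard can be tested, audited, and reasoned about in a way the underlying model cannot.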


Excerpt from Gullnaz Baig, Executive Director, Angsana Council

Multidisciplinary collaboration between those with technical prowess and those who understand society is required to build AI that is safe and equitable by design. While this need should be obvious, we should not expect it to come naturally without considerable pressure.

In the current race to advance AI, a multidisciplinary approach is seen as cumbersome because it can slow development down. Product development sprints do not lend themselves well to the deliberations of policy teams. This is as true for the big tech companies racing ahead with their own foundation models as it is for startups integrating AI into their offerings.

So we are left either relying on tech leaders to do the right thing, if they can figure it out, or on states to develop punitive regulations to keep AI development in check.

Yet regulations, even those as robust as the EU AI Act, are useful only as accountability frameworks. While they are the state’s most powerful tool to wield against tech companies, they are also a weak one: they are often reactive and struggle to keep pace with the rapid advancement of AI technology. In some cases, regulation kicks in only after the harm has been done.

There is a third option, one that lets the state engage with technologists at a more meaningful level: AI can be developed to check other AIs, keeping the ecosystem safe overall. One example is DetectGPT, a tool that helps verify whether a text is AI-generated.

States should view AI development as an ecosystem. Even as they develop regulations to check risks and harms, they should incentivise the development of AI for safety. National AI strategies should include specific provisions to co-invest in safe AI technology, seed research into AIs that check for discrimination, IP violations, and other harms, and even offer visa and tax incentives to companies focused on building AI for safety, so that, on balance, the ecosystem is safe for everyone.