Business transformation through generative AI
- Fully unlocking generative AI’s potential requires time and human expertise, according to KPMG.
- Malaysia lags behind neighboring countries in AI adoption.
The rise of generative artificial intelligence (AI) models like ChatGPT and DALL-E has opened up a world of possibilities, offering unprecedented automation capabilities at incredible speed for businesses.
Generative AI has already found numerous applications across industries. For example, in a recent interview with Chris Street, Managing Director of Data Centres, Asia Pacific, JLL, Tech Wire Asia explored the often-overlooked impact of generative AI on data center demand. As society integrates the technology more deeply into everyday life, demand for data center computing power surges. However, the high server density required for AI workloads also poses challenges for energy efficiency and sustainability. At the same time, AI and machine learning have the potential to enhance data center performance and support advanced cooling technologies such as liquid cooling.
Nevertheless, it is important to acknowledge that generative AI is not flawless. According to KPMG, fully unlocking the potential of generative AI in a responsible, trustworthy, and safe manner will require time and human expertise.
A global survey conducted by KPMG, which involved over 17,000 participants from 17 countries known for their AI activity and readiness, revealed that while 82 percent of respondents had heard of AI, three out of five (61 percent) expressed wariness about trusting AI systems. The predominant concern, raised by 84 percent of respondents, was cybersecurity risk.
Generative AI in action across various business functions
According to Alvin Gan, Head of Technology Consulting at KPMG in Malaysia, generative AI models offer benefits across various business functions, including IT, human resources, operations, finance, and more. For instance, these models can contextualize ESG data and support reporting operations, helping organizations clearly outline their ESG initiatives.
Gan emphasized that while generative AI has expanded applications, it is not without risks. Many of these models rely on user-inputted data to improve their underlying algorithms over time. However, this data could be used to generate responses for other users, potentially exposing an organization’s intellectual property or trade secrets. This risk is heightened when employees are not adequately trained in using AI applications, including confidentiality and quality assurance measures.
Data quality and ethics are significant concerns, as the ownership of content processed through generative AI applications remains unclear. The unrestricted use of such applications can expose organizations to intellectual property infringement and broader risks related to fraud, brand reputation, and public perception.
Gan further emphasized that users of generative AI do not merely consume the technology; they also play a role in its self-learning evolution. This places a significant responsibility on Chief Information Security Officers (CISOs) to shift their focus from simply solving problems to defining them, and to devise new approaches for collaboration between teams and machines. These efforts aim to enhance business efficiency while ensuring compliance with applicable laws and professional standards.
This responsibility extends to other roles as well, including software developers. The advent of AI has significantly lowered the barrier to entry for individuals from diverse backgrounds and experience levels. Almost anyone with a clear vision of their goals, even with limited prior knowledge, can work like a junior developer. This democratization of access can be a positive development, particularly as Singapore’s digital economy depends on fostering a thriving developer community.
However, it’s important to note that generative AI cannot replace the experience and skills of developers. While AI is transforming how developers work – enabling them to be faster, more effective, and happier – developers remain in control and retain ownership of the resulting code.
Bridging the security gap for a digital future
While Malaysia has seen an increase in AI adoption, it still lags behind neighboring countries, as indicated in the Malaysia National Artificial Intelligence Roadmap 2021-2025 (AI Map) released by the Ministry of Science, Technology & Innovation (MOSTI). The AI Map reports that only 16 percent of Malaysian organizations that have implemented AI have taken steps to ensure the security of their AI applications and systems. Even fewer organizations (10 percent) have developed risk management and cybersecurity policies specifically for AI.
Gan concluded by stating, “As concerns over security, privacy, data trust, and ethics grow, it’s important to be vigilant and ensure your organization is using AI while upholding digital trust. Organizations need to establish the necessary guardrails for its secure implementation and use in order to maximize the benefits of generative AI, and this includes addressing the potential cybersecurity gap at the Board level.”
It is evident that the rise of generative AI models presents both opportunities and challenges for businesses. While these models offer immense potential for automation and efficiency, there are concerns related to data quality, ethics, intellectual property, and cybersecurity. It is crucial for organizations to approach generative AI with caution, ensuring proper training, data management, and security measures are in place.