DIGITAL LIFE
Generative AI may cut costs in machine-learning systems, but it increases risks of cyberattacks and data leaks
Using generative AI to design, train, or perform steps within a machine-learning system is risky, argues computer scientist Michael Lones in a paper appearing in Patterns. Though large language models (LLMs) could expand the capabilities of machine-learning systems and decrease costs and labor needs, Lones warns that using them reduces transparency and control for the people developing and using these systems and increases the risk of malicious cyberattacks, data leaks, and bias against underrepresented groups.
"Machine-learning developers need to be aware of the risks of using GenAI in machine learning and find a sensible balance between improvements in capability and the risks that might come with that," says Lones, a computer scientist at Heriot-Watt University in Edinburgh, UK. "Given the current limitations of generative AI, I'd say this is a clear example of just because you can do something doesn't mean you should."
How generative AI is being integrated
Machine-learning systems are algorithms that learn to recognize patterns in data, which they can then use to make predictions and decisions regarding new data. Machine learning has been around for decades, and most people encounter it in their daily lives in the form of spam filters, product recommendations on e-commerce websites, and social media newsfeeds. In the last two or so years, there has been a push to incorporate generative AI (in the form of LLMs) into machine-learning systems, but doing so carries risks and limitations that developers and the general public should be aware of, Lones says.
Lones explores four ways in which generative AI is currently being applied in machine learning: as a component within a machine-learning pipeline, to design and code machine-learning pipelines, to synthesize training data, and to analyze machine-learning outputs. All of these applications carry risks, Lones says, and these risks are compounded if LLMs are used for multiple tasks within a machine-learning system, or if LLMs are "agentic"—meaning they can autonomously use external tools to solve problems.
Complex systems and high-stakes sectors
"If you have GenAI working in a number of different ways within your machine-learning workflows or system, then they can interact in unpredictable and hard-to-understand ways," says Lones. "My advice at the moment is to avoid adding too much complexity in terms of how we use GenAI in machine learning, particularly if you're in a sector that has high stakes that impact people's lives and livelihood."
One of the biggest risks is simply that LLMs sometimes make mistakes, reach bad decisions, and fabricate or "hallucinate" information. Lones says that these errors aren't necessarily predictable and may be difficult to evaluate because LLMs operate in a non-transparent way, which presents an additional issue for legal compliance.
"In areas like medicine or finance, there are laws about being able to show that the machine-learning system is reliable, and that you can explain how it reaches decisions," says Lones. "As soon as you start using LLMs, that gets really hard, because they're so opaque."
Security, privacy, and public awareness
Lones advises machine-learning developers to always manually evaluate LLM-generated code and outputs. He also warns that bigger, remotely hosted LLMs often store and share data, which means that using them opens up opportunities for cybersecurity breaches and the leakage of data and sensitive information.
"It's important for people in the general public to be aware of the limitations of GenAI systems," says Lones. "Companies will deploy these systems to do things like cut costs, and this may improve the experience that end users get, but it may also have negative consequences, such as bias and unfairness."
Generative artificial intelligence (GenAI) can indeed reduce costs in machine learning (ML) systems, but these savings come with new operational and financial risks. While traditional ML focuses on analysis and prediction, GenAI centers on creation and synthesis, transforming the software development lifecycle and business management.
How generative AI reduces costs
GenAI reduces expenses primarily by automating tasks that previously required skilled human intervention or slow manual processes:
Software and IT development: GenAI tools accelerate workflow by generating repetitive code (boilerplate), creating test scripts, and writing technical documentation. Some companies report reductions of 30% to 45% in development costs.
Data management and R&D: GenAI can synthesize training data, which is crucial when historical data is scarce or protected by privacy, reducing research and development costs by about 10% to 15%.
Customer operations: Advanced chatbots based on LLMs can manage a higher percentage of complex queries without constant human supervision, decreasing the cost per ticket by up to 60%.
Mechanical and structural design: The use of generative AI allows for the optimization of material use, creating lighter and more resistant designs that reduce waste and production costs.
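The data-synthesis idea in the list above can be sketched in code. This is a minimal illustration, not the method from the paper: in a real pipeline an LLM would be prompted to produce realistic records, but here a seeded stub sampler stands in for the model so the example is self-contained. The field names and fraud rate are invented for illustration.

```python
import random

def synthesize_transactions(n, seed=0):
    """Generate synthetic transaction records as stand-ins for scarce or
    privacy-protected training data. A seeded stub sampler replaces the
    LLM call so the sketch runs on its own and is reproducible."""
    rng = random.Random(seed)
    categories = ["groceries", "travel", "utilities"]
    return [
        {
            "amount": round(rng.uniform(5.0, 500.0), 2),   # transaction size
            "category": rng.choice(categories),             # spending category
            "is_fraud": rng.random() < 0.02,                # rare positive class
        }
        for _ in range(n)
    ]

records = synthesize_transactions(1000)
```

Because the generator is seeded, the synthetic dataset is reproducible, which matters when auditing how training data was produced; LLM-generated data, by contrast, is harder to reproduce and inspect, which is part of the transparency risk Lones describes.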
The hidden side of costs and risks
Despite the potential for savings, experts warn of the "cost iceberg" of GenAI:
Uncertainty and scale: Computing costs can skyrocket when moving from pilots to production systems, with predictions of a nearly 90% increase in cloud spending between 2023 and 2025 due to GenAI.
Security and privacy risks: The use of LLMs increases the opacity of systems, making it difficult to control sensitive data and opening doors to leaks and cyberattacks.
Continuous maintenance: Unlike traditional software, AI models require constant retraining and monitoring. It is estimated that ongoing support to prevent model degradation can consume up to 75% of the resources initially invested.
Biases and hallucinations: The lack of transparency in LLMs can introduce biases or "hallucinations" (false information), which generates legal and compliance risks, especially in sensitive sectors such as finance and medicine.
To maximize return on investment (ROI), organizations are adopting strategies such as intelligent model routing (using smaller and cheaper models for simple tasks) and the use of AI gateways to centralize governance and spending control.
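The model-routing strategy mentioned above can be sketched as a simple dispatcher. This is a toy sketch under stated assumptions: the model names and the complexity heuristic (query length plus a few keyword markers) are invented for illustration, and production routers typically use a learned classifier or a lightweight LLM to score difficulty.

```python
def route_query(query, length_threshold=120):
    """Route a query to a cheap small model unless it looks complex,
    in which case send it to a larger, more capable (and costlier) model.
    The heuristic and model names here are illustrative assumptions."""
    complex_markers = ("explain", "analyze", "compare", "summarize")
    needs_large = (
        len(query) > length_threshold                       # long prompts
        or any(m in query.lower() for m in complex_markers)  # reasoning-style asks
    )
    return "large-model" if needs_large else "small-model"
```

Routing most traffic to the small model is what produces the cost savings; the design trade-off is that a misrouted complex query degrades answer quality, so the threshold and markers need tuning against real traffic.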
Provided by Cell Press