LP Magazine EU


Web and mobile fraud

The Art of Artificial Intelligence Risk Management

High Stakes on the High Street as Retailers Enter the Arms Race towards Regulation Tackling “Rise of the Machine” Scams

When the 20th century Austrian philosopher Robert Musil coined the phrase “progress would be wonderful—if only it would stop” in the 1930s, he could not have known how prescient his warning from history would become, particularly when you consider the idiom in relation to the limitless learning capacity of generative artificial intelligence (AI).

He could not have known that the relentless thirst for human knowledge would trigger the rise of machine learning and its pivotal role in the fourth industrial revolution, in the same way that James Hargreaves’s “Spinning Jenny” created an automated technology that would transform lives, mindsets, and geographical landscapes from bucolic rural settings of fields, flora, and fauna to smog-laden cities, choking chimneys, and ugly urban sprawls, all in the name of progress.

In short, generative AI is a gamechanger. Whereas conventional AI has the capacity to perform basic repetitive tasks that typically require human intelligence, generative AI uses deep-learning models that can almost think independently to produce high-quality text, images, and other content based on the rich seam of human-generated data they were trained on.

AI has gone through many cycles of hype, but the advent of ChatGPT seems to mark a genuine inflection point.

OpenAI’s chatbot, powered by its latest large language model, showed the world that it could produce poems, or, depending upon the legal position on intellectual property and copyright, plagiarise them, and, to the concern of university lecturers the world over, churn out essays that look as though a human wrote them.

Like the progress we would like to stop in order to take a breath, the technology has moved faster than the regulation to control it. As health and safety regulation became the unwanted offspring of the first industrial revolution, rules-based legal and compliance frameworks are constantly playing catch-up with AI technology and the insatiable appetite of business and Government to own and optimise its potential against the backdrop of a global technology arms race.

Have We Got the Energy for AI?

In January, President Donald Trump wasted little time in the first few days of his new presidency in announcing the Stargate Initiative, a $500 billion private sector deal to expand US artificial intelligence infrastructure. Spearheaded by tech giants OpenAI, SoftBank, and Oracle, Stargate represents the largest single AI project in history.

However, within the same month, DeepSeek, China’s new AI kid on the block, triggered a sell-off that wiped close to $1 trillion off the value of US technology stocks, with Nvidia, the US’s largest chip maker, suffering the biggest one-day fall in stock market history, in what President Trump called a “wake-up call” for Silicon Valley.

Such is the transformative speed of AI that the Chinese debutant is claimed to work faster, smarter, and at a fraction of the price of ChatGPT, while also using less energy to run, a pointed reminder of the other worrying global impact of AI’s warp-speed roll-out: its hunger for power.

Research from Goldman Sachs suggests that data centre power demand, driven largely by AI, will grow 160 per cent by 2030. To put that in context, it also states that a single ChatGPT query uses nearly ten times the electricity of a traditional Google search.

According to Forbes, Stargate’s data centre energy requirement puts the emerging technology on a collision course with measures to tackle the climate emergency. The article, “Stargate’s $500 Billion AI Bet: Have We Forgotten The Hidden Cost?”, said:

“By 2028, data centres are expected to consume as much as 12 per cent of the US’s total electricity demand, more than double their share today. Ambitious projects like Stargate could reshape the nation’s energy consumption and environmental footprint. As we are faced with increasingly frequent extreme weather and natural disasters, managing the environmental impact of data centres should take priority. Will Stargate lead us to a sustainable future, or will it lock us into an era of escalating energy demand and ecological strain?”

Retail and the Battle to Control AI’s Avarice 

Retail is one of the first industries to the party when it comes to harnessing the potential of AI, recognising the efficiencies the technology can deliver in carrying out repetitive tasks as well as its capacity to provide intelligence. Self-checkouts (SCOs), widely adopted across the supermarket and grocery sector, already use AI-driven algorithms and data analysis to identify patterns, anomalies, and suspicious behaviours, making it easier to detect potential cases of shrinkage.

By analysing large volumes of data collected from various sources, including point-of-sale systems, security cameras, and electronic article surveillance (EAS) tags, AI systems can quickly identify potential cases of theft or fraud.
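
For the technically minded, the pattern-spotting described above is, at heart, anomaly detection. The minimal Python sketch below, using scikit-learn’s IsolationForest, gives a flavour of the approach; the transaction fields, figures, and thresholds are invented for illustration and are not drawn from any real retailer’s system.

```python
# Illustrative sketch only: anomaly detection over hypothetical
# self-checkout (SCO) transaction features. All field names and
# numbers are invented; a real deployment would join POS, CCTV,
# and EAS event data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Each row is one SCO transaction:
# [items_scanned, scan_duration_sec, voided_items, weight_mismatches]
normal = rng.normal(loc=[12, 90, 0.3, 0.1],
                    scale=[5, 30, 0.6, 0.3],
                    size=(500, 4))
suspicious = np.array([
    [25, 40, 6, 4],   # many items scanned very fast, many voids
    [3, 15, 0, 3],    # few items but repeated weight mismatches
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Negative scores are flagged for review; positive scores pass.
for row, score in zip(suspicious, model.decision_function(suspicious)):
    status = "REVIEW" if score < 0 else "ok"
    print(row, round(float(score), 3), status)
```

In practice, a flagged transaction would be routed to a loss prevention officer for human review rather than treated as proof of theft, echoing the theme of human oversight that runs through the rest of this article.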

Using AI to Defeat AI Scams

Using AI to fight AI-type scams is another area of growth for the retail sector. In Loss Prevention Magazine Europe’s autumn 2023 article “Buy, Lie, and AI”, voice authentication specialist Pindrop talked about the findings of its intelligence and safety report, chillingly titled “The Fraudster’s Strike Back”.

In it Pindrop said: “Following recent economic changes, fraudsters have shifted focus away from Government pay-outs and back to their traditional targets—contact centres”. 

“Today’s fraudsters are also armed with new tactics, including the use of personal user data available on the dark web, advancements in artificial intelligence (AI) for creating synthetic audio, and an increased willingness to work in teams. This has led to a 40 per cent increase in fraud rates on contact centres”.

Retailers use such contact centres for customer service and transactions, which makes deepfake voice duplication a significant issue for businesses, carrying the risk of losses in cash and trust, as well as the reputational damage flowing from such attacks.

The company, which is in the business of detecting AI-generated synthetic voices, said: “The media presents AI as a new problem, but our whole business since around 2011 has been predicated on authenticating voices simply because of the necessity for trust—we all want to know who we are speaking to”.

Which? AI Scam Report

A 2024 survey of more than one thousand members of consumer watchdog Which? found that one in six people who placed an online order received items that did not match the product description or the image. This was the result of the use of realistic fake videos generated by AI on content-hungry platforms such as TikTok, Facebook, and Instagram. 

Here Which? found evidence of scammers taking real videos from people’s genuine social media accounts, including those of celebrities, and subtly changing the way their faces and bodies move, even adding a different voice. More sophisticated software is capable of creating entirely original footage from a combination of images found online, stitched together to create a person who doesn’t really exist.

Entire websites including images, text, videos, stores, and checkouts, can now be created quickly using online tools that just need to be given a few prompts about themes. Another form of AI, known as bots, can also fake comments and likes on websites and social media pages from non-existent followers and fans, which can make it look like a fake company has lots of loyal and happy customers.

AI tools can easily generate content for fake listings on online marketplaces. Scam listings commonly feature high-value or in-demand products at temptingly low prices, such as new and used tech like mobile phones, smartwatches, and tablets, as well as children’s toys, bikes, and household appliances. 

To this end, retailers are investing in more robust cyber-security measures, such as AI-driven fraud detection tools and employee training on identifying AI-enabled scams, not least because once a fraud has happened, social media sites can be slow to take remedial action.

Seasonal events such as Black Friday and Cyber Monday are also prone to AI-targeted fraud. The National Cyber Security Centre reported that shoppers lost over £11 million to a wide range of shopping scams in the lead-up to Christmas last year.

Algorithms with Attitude

But fraud is not the only threat from AI. Human prejudices also play a part in creating flawed outcomes from its use. As the 18th century poet and philosopher Alexander Pope observed, “To err is human”. In modern parlance, this means that AI mimics the behaviour and biases of those who programme it, in turn producing flawed algorithms that encode human bias.

Many retailers rely on AI for supply chain management and automate pricing strategies using predictive analytics to anticipate demand and optimise stock levels during peak periods like Black Friday. 

While accurate demand forecasting, on-shelf availability, and dynamic pricing can maximise sales, poor calibration of AI algorithms may lead to unintended consequences such as over- or under-supply, pricing errors, loss of consumer trust, and significant revenue loss, while disproportionately high pricing may drive customers away or even discriminate against certain consumer groups.
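
One common mitigation, sketched below in Python, is to wrap whatever price a demand model suggests inside hard guardrails, so that a mis-calibrated algorithm cannot surge prices beyond a defensible band. The class, bounds, and figures are illustrative assumptions rather than a description of any retailer’s actual system.

```python
# Illustrative guardrail around a dynamic-pricing model's output.
# The bounds and prices are hypothetical.
from dataclasses import dataclass

@dataclass
class PriceGuardrail:
    base_price: float           # reference shelf price
    max_uplift: float = 0.15    # never more than 15% above base
    max_discount: float = 0.30  # never more than 30% below base

    def clamp(self, suggested: float) -> float:
        """Clamp an algorithmic price suggestion into the fair band."""
        floor = self.base_price * (1 - self.max_discount)
        ceiling = self.base_price * (1 + self.max_uplift)
        return min(max(suggested, floor), ceiling)

guard = PriceGuardrail(base_price=100.0)
for suggestion in (58.0, 97.5, 240.0):  # e.g. outputs of a demand model
    print(f"model suggested {suggestion:.2f} -> charged {guard.clamp(suggestion):.2f}")
```

Guardrails of this kind will not fix a biased model, but they cap the damage it can do while humans investigate why its suggestions drifted.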

While advertising standards and consumer laws protect shoppers from misleading pricing and hidden costs—the Competition and Markets Authority (CMA) is currently investigating the fairness of dynamic pricing in the wake of the scandal surrounding the over-inflated price of the Oasis reunion tickets—the law has a long way to go to keep up with AI scams or flawed machine learning.

The EU AI Law

Meanwhile, Governments and regulatory bodies are introducing new rules to address the ethical, security, and privacy risks associated with AI. Keeping pace as the technology morphs and multiplies from its almost inexhaustible machine-learning sources is, and will always be, the challenge.

KPMG and the University of Queensland’s “Trust in Artificial Intelligence: A Global Study” found that three in five people express wariness about AI systems, and that 71 per cent expect regulatory measures.

In response, the European Union (EU) has made significant strides with the groundbreaking Artificial Intelligence Act (AI Act), which is anticipated to set a new global standard for AI regulation.

The AI Act entered into force in August 2024, with most AI systems needing to comply by August 2026, and takes a risk-based approach to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability.

The Act provides the first-ever comprehensive legal framework on AI, addressing the risks of the technology and positioning Europe to play a leading role globally.

The Act aims to strike a delicate balance, fostering the emerging technology’s adoption while upholding individuals’ rights to responsible, ethical, and trustworthy AI use. 

The legal perspective is critical to controlling the borders between human and artificial intelligence, particularly in the UK, where Britain’s decision to leave the EU under its protracted and complex Brexit agreement potentially means fewer protections against the excesses of the new technology.

“AI technology in itself is not the issue, it is the huge potential for its misuse that provides the need for human oversight through the law,” said Racheal Muldoon, a partner in the Financial Services and Funds team at law firm Charles Russell Speechlys.

“In England and Wales there is no one piece of legislation on the emerging impacts of AI, which is in stark contrast to Europe. In that respect we are very much reactive to the impact of AI and potentially playing a game of cat and mouse,” she said.

“Here, we have a principles-based approach to legally controlling AI whereas the new EU AI Act is rules-based.”

“This means that in the UK, unlike Europe, we do not have one law that covers most legal risks generated by AI.”

An award-winning lawyer and former barrister with globally recognised expertise at the intersection of the law and technology, including digital assets and artificial intelligence, Racheal said UK retailers who have European operations will be bound by the rules of the new EU law.

“As of February this year, the new EU law makes it a compulsory requirement for providers and deployers of AI systems to ensure a sufficient level of AI literacy for their staff and other persons working with AI systems on their behalf. For this purpose, organisations should put in place robust AI-training programmes to achieve this,” said Racheal.

The first of these obligations, then, is the requirement on organisations to ensure sufficient AI literacy among staff and anyone else working with AI systems on their behalf.

Secondly, the Act sets out a clear set of risk-based rules for AI developers and deployers regarding specific uses of AI and a hierarchical categorisation of that risk.

It classifies AI systems into four different risk levels: unacceptable, high, limited, and minimal risk. Each class has different regulations and requirements for organisations developing or using AI systems. 
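
As a rough illustration of how a retailer might triage its own AI use cases against those four tiers, consider the deliberately simplified Python sketch below. The mappings and obligations shown are assumptions for illustration only; real classification turns on the Act’s annexes and proper legal advice.

```python
# Deliberately simplified, illustrative triage of retail AI use cases
# against the AI Act's four risk tiers. These mappings are assumptions,
# not legal advice.
USE_CASE_TIER = {
    "real-time biometric identification in public spaces": "unacceptable",
    "facial recognition to exclude known offenders": "high",
    "customer-facing chatbot": "limited",
    "stock-level demand forecasting": "minimal",
}

TIER_NOTES = {
    "unacceptable": "prohibited outright",
    "high": "conformity assessment, human oversight, and logging required",
    "limited": "transparency obligations, e.g. disclosing that AI is used",
    "minimal": "no new obligations beyond existing law",
}

for use_case, tier in USE_CASE_TIER.items():
    print(f"{use_case}: {tier} risk ({TIER_NOTES[tier]})")
```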

“If you want to provide AI systems across Europe, failure to comply can result in damning fines, which could be as high as 35 million euros or 7 per cent of a company’s global annual turnover for the previous year, whichever is higher.”

“We need to be able to measure and manage human governance and intervention which is key to managing AI risks on a day-to-day basis,” she said.

In the UK, the absence of a cohesive AI Act means that businesses have to seek the right legal avenues to ensure the technology is not in breach of any UK laws.

One of the key areas of concern is data protection, a major stumbling block for retailers in the roll-out of AI-dependent technologies such as facial recognition (FR), which many businesses are looking to introduce as a means of preventing prolific and persistent offenders from entering stores.

To this end, the Information Commissioner’s Office (ICO) has developed a user-friendly AI toolkit to help businesses understand the rules and restrictions and how they apply to artificial intelligence.

The toolkit—which can be found at www.ico.org.uk—provides guidance for data protection-compliant AI, as well as helping businesses interpret data protection law as it applies to AI systems that process personal data.

The guidance:

provides a clear methodology to audit AI applications and ensure they process personal data fairly, lawfully and transparently;

ensures that the necessary measures are in place to assess and manage risks to rights and freedoms that arise from AI; and

supports the work of ICO investigation and assurance teams when assessing the compliance of organisations using AI.

The guidance is not a statutory code, but it contains advice on how to interpret relevant data protection law as it applies to AI and provides recommendations on good practice for organisational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate.

To satisfy the spirit of the ICO’s requirements on human oversight of AI, businesses should conduct a full Data Protection Impact Assessment (DPIA) into how the technology could impact individuals, and should also look at safeguards to avoid falling foul of existing data protection legislation.

Ilona Bateson, a commercial associate and retail sector specialist at Charles Russell Speechlys who advises on commercial contracts, especially in service supply related to technology, digital platforms, social media, marketing, and advertising, said: “AI data sets must be accurate, and when engaging third-party providers, contractual protections such as warranties and indemnities should be included.”

“To remain competitive, businesses need to understand both the opportunities and challenges posed by AI. Retailers may look to bolster their risk teams with AI specialists, whilst also providing training for teams such as marketing where AI is regularly used—for example, in the creation of ads themselves and the media strategy used to deliver them to the end customer.”

Whether on the receiving end of deepfake scams or seeking to harness the power of the technology against AI’s more malign elements, businesses recognise that it is here to stay and that they need to tread carefully in its real-world application.

The rise of the machines is both an international arms race and a Mexican stand-off between free market innovators and risk regulators—both of whom have their eyes on the prize and their twitchy fingers on the triggers. 
