As the GDPR evolves to provide greater clarity surrounding AI, the onus is on data controllers to carry out regular quality checks of their automated systems.
The vast scope of the GDPR has raised fresh challenges, chief among them the complex interaction between AI and the regulation. In particular, this shines a spotlight on Article 22, which concerns automated profiling and decision-making, where the incorrect use of personal data can have serious ramifications for the individuals concerned.
The problem is that many existing AI systems take automated decisions without the user's consent. Since data is the engine behind AI, Article 22 affects every industry hoping to leverage technology to drive efficiencies through automation.
In an increasingly data-reliant business landscape, how can organizations reconcile the advent of disruptive technologies and their inherent risks while remaining fully compliant?
The rise and rise of AI
AI's influence continues to grow worldwide, and it is revolutionizing business processes in a way that is no longer theoretical or the stuff of science fiction, but tangible and immediate.
With the EU representing as much as 21% of global GDP in 2019 [1], EU-based organizations have no choice but to strike the right balance between reaping the benefits of AI and managing it to ensure there are no unintended consequences.
In the UK, the government has championed the flourishing AI sector, underscoring the country’s position as a true leader in emerging technologies, and is working towards making the UK a global center for data-driven innovation. According to a recent forecast [2], AI can be a major contributor to growth, with the potential to add £232bn to the UK economy by 2030.
However, it's a tale of two halves. Although UK businesses and sectors are increasingly adopting and investing in AI as part of their future, most remain largely unprepared for it. Worryingly, findings [3] reveal that fewer than half of businesses have protocols in place for implementing AI safely and ethically.
This has led the Information Commissioner's Office to issue a call to action for industry leaders to unite in establishing a new data protection framework for the use of AI. It's vital that businesses promote greater transparency and integrate data protection measures by design and default into their AI strategies. This is firmly on the agenda for key sector players who are leading by example: for instance, a new code of conduct for the use of AI in the NHS was recently launched to ensure that only the safest and best systems are used.
An evolving relationship
Aiming to instill responsible practices, Article 22 prescribes that AI — including profiling — cannot be used as the sole decision-maker in choices that can have legal or similarly significant impacts on individuals’ rights, freedoms and interests. For instance, an AI model cannot be the only step for deciding whether a borrower is eligible to qualify for a loan.
There are exceptions to the rule: where the decision is necessary for entering into a contract, where Union or Member State law authorizes such decisions (for example, to detect tax fraud), or where the data subject gives his or her explicit consent. Under the first and third exceptions, the individual is also entitled to contest the automated decision and obtain human intervention.
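The routing logic these exceptions imply can be sketched in code. This is a simplified illustration, not a legal checklist: the class and function names are hypothetical, and a real system would need legal review of each condition.

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    """Illustrative flags mapping to the Article 22(2) exceptions."""
    necessary_for_contract: bool  # Art. 22(2)(a): needed to enter/perform a contract
    authorised_by_law: bool       # Art. 22(2)(b): e.g. tax-fraud detection
    explicit_consent: bool        # Art. 22(2)(c): data subject's explicit consent

def may_decide_automatically(ctx: DecisionContext) -> bool:
    # A solely automated decision with legal or similarly significant
    # effects is permitted only if at least one exception applies.
    return (ctx.necessary_for_contract
            or ctx.authorised_by_law
            or ctx.explicit_consent)

def must_offer_human_review(ctx: DecisionContext) -> bool:
    # Under the contract and consent exceptions, the individual can
    # contest the decision and obtain human intervention; the
    # legal-authorisation route carries its own statutory safeguards.
    return ctx.necessary_for_contract or ctx.explicit_consent
```

A system built this way would fall back to manual review whenever `may_decide_automatically` returns false, rather than issuing the decision itself.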
Beyond this, organizations face a process of trial and error in terms of applying this to their own systems, with the added pressure of even the smallest mistake potentially causing very damaging consequences.
The grey areas of data protection
If one were to play devil's advocate, automated decision-making is often justified, such as when an AI tool rejects a job application because the applicant has not provided sufficient information. However, Article 22 is triggered if the tool rejects the application even though the applicant supplied all necessary information and met the criteria for the job.
The crucial determiner here is at which stage of the automated decision-making process the application was rejected, and why.
Undoubtedly, the GDPR is a step in the right direction as it empowers individuals to regain ownership of their personal data. However, one of the major criticisms about the game-changing regulation is its ambiguous language that could result in serious misinterpretation.
Article 22 is designed with an admirable objective at its core: to prevent unfair bias or discrimination from entering into a decision. Profiling, as part of AI decision-making, can have repercussions when it collects and processes sensitive data such as race, age, health information, religious or political beliefs, shopping behavior and income.
If misused, the darker side of automated profiling means that the more vulnerable segments of society will bear the brunt of any negative outcomes.
Addressing the conundrum
As a very first step, there is a need to ensure that the Article is understood correctly by all, not just to uphold corporate reputations but — most crucially — to safeguard individuals. There is an element of education that still needs to take place to allow businesses to translate the requirements of the GDPR into their real use cases.
As the GDPR evolves to provide greater clarity around AI, data controllers must carry out regular quality checks of their automated systems. Guidelines on conducting Data Protection Impact Assessments (DPIAs) can help ensure that remedial action is taken promptly to manage any negative impact. Other checks should include auditing algorithms for errors and allowing users to contest a decision.
Another effective solution might be for companies to simply sidestep the restrictions of Article 22 by designing the AI workflow to stop one step short of the final decision, so that the relevant inputs are handed to the individuals concerned, who then make the final call.
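That human-in-the-loop pattern can be sketched as follows. The scoring rule, field names and threshold here are invented for illustration; the point is only the shape of the workflow, in which the model recommends and explains, but a human issues the binding decision.

```python
def score_application(application: dict) -> dict:
    """Hypothetical scoring step; in practice this would be a trained model."""
    score = 1.0 if application.get("income", 0) >= 30000 else 0.4
    return {
        "recommendation": "approve" if score >= 0.5 else "refer",
        "score": score,
        "inputs_used": sorted(application.keys()),  # surfaced for the reviewer
    }

def final_decision(application: dict, reviewer_decision: str) -> dict:
    # The binding outcome comes from a human reviewer, so the processing
    # is no longer "based solely" on automated means.
    summary = score_application(application)
    return {**summary, "decision": reviewer_decision, "decided_by": "human"}

# The model refers this borderline case; a human reviewer approves it.
result = final_decision({"income": 25000, "employment": "part-time"}, "approve")
```

The design choice worth noting is that the automated step never writes the `decision` field itself; that key exists only in the human-reviewed output.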
It's important to recognize that adopting the GDPR and the continued growth of AI are not mutually exclusive; they can be complementary. The regulation will not stem the advance and potential of next-generation technology as long as people and businesses are well prepared and focus on the GDPR's underlying principles: protecting the privacy of individuals and ethical practice. This can lead to an enhanced customer experience and even greater mainstream adoption of AI.
Only when organizations put a premium on gaining — and keeping — customer trust, can they truly harness the power of AI in tandem with the GDPR.
This point-of-view article was originally published on InformationAge.com. Information Age is a leading publication catering to business leaders at the forefront of technology, innovation, key industry news and trends. It has a circulation of approximately 250,000.
Since its launch in 1995, Information Age has been regarded as one of the most respected technology titles in the B2B realm. More than 20 years on from its inception, the publication stands as the UK’s number one business-technology magazine, holding a strong influence over its prestigious readership of IT leaders.
References:
[1] https://foreignpolicy.com/2017/02/24/infographic-heres-how-the-global-gdp-is-divvied-up/