US FTC Publishes Guidance On Using Artificial Intelligence And Algorithms

On April 8, 2020, the US Federal Trade Commission ("FTC") published a business blog post titled "Using Artificial Intelligence and Algorithms" (the "FTC Blog"). The FTC Blog suggests that, while the use of AI technologies and algorithms has immense potential for improving welfare and productivity, it also presents risks, including the possibility of unfair or discriminatory outcomes or the perpetuation of existing socioeconomic disparities. The FTC's recommendations draw on its broad experience, its law enforcement actions against entities engaged in AI and automated decision-making (including its November 2018 hearing to explore AI, algorithms, and predictive analytics), and its prior publications on the use of big data analytics and machine learning (including its 2016 Big Data: A Tool for Inclusion or Exclusion report, which offered guidance to companies on how to reduce opportunities for bias). They underline that the use of AI tools should be "transparent, explainable, fair, and empirically sound, while fostering accountability."

This blog post summarizes the FTC's recommendations and guidance on the use of AI technologies and algorithms to make decisions about consumers, and explains how companies can manage the consumer protection risks that may arise from their use.

Transparency on Use of Automated Tools and Collection of Sensitive Data

The FTC cautions against using automated tools in ways that could deceive customers, including by using "chatbots, fake followers, phony subscribers, and bogus 'likes'", which could give rise to an FTC action. The FTC also highlights the importance of transparency when collecting sensitive data, noting that secretly collecting audio or visual data to train algorithms could also give rise to an FTC action.

Additionally, entities that make automated decisions relying on information obtained from a third-party vendor may be required to provide consumers with an "adverse action" notice. For example, under the US Fair Credit Reporting Act ("US FCRA"), vendors that collect consumer information to automate decision-making about eligibility for insurance, housing, credit, employment, or similar benefits and transactions may be considered a "consumer reporting agency." This may trigger additional duties under the US FCRA for entities that use such reports or scores as a basis to, for instance, "deny someone an apartment, or charge them higher rent." In such cases, entities must provide the affected consumers with an adverse action notice, which should inform the consumers of their right to access the information reported about them and to correct inaccurate information.

Explaining Algorithmic Decision-Making to Consumers

The FTC recommends that companies disclose to consumers the principal reasons why they were denied something of value, such as the reasons for being denied credit. It notes that companies must be specific in their reasoning, which requires them to know what data were used in their models, to know how such data were used to arrive at a decision and, above all, to be able to explain to the consumer how AI technologies or algorithms made their decisions. Similarly, companies that use a behavioral scoring model or other automated tools to change the terms of a deal, including reducing consumers' credit limits, must also inform their consumers.

Companies that use AI algorithms to assign risk scores to consumers must, under the US FCRA, disclose the key factors that affect such scores, rank-ordered by importance. Specifically, the US FCRA requires that consumers be given notice if their credit score is used to deny them credit or offer them less favorable terms. Consumers must also receive a description of their score, including its source and the range of scores under the credit model used, and, at a minimum, the four key factors that most adversely affected their credit score, listed in order of importance based on their effect on the score.
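
For illustration only, the sketch below shows one way the rank-ordering requirement could be implemented for a simple linear scoring model. The feature names, weights, and profiles are hypothetical assumptions, not factors prescribed by the US FCRA or the FTC Blog.

```python
# Minimal sketch of rank-ordering adverse-action "key factors" for a
# simple linear credit-scoring model. All names, weights, and profiles
# here are hypothetical illustrations, not any actual bureau's model.

WEIGHTS = {  # points contributed per unit of each (hypothetical) feature
    "payment_history": 4.0,
    "credit_utilization": -2.5,
    "account_age_years": 1.5,
    "recent_inquiries": -3.0,
    "open_accounts": 0.5,
}

def key_adverse_factors(applicant: dict, reference: dict, top_n: int = 4):
    """Return the top_n factors that most reduced the applicant's score
    relative to a reference (e.g., average approved) profile, most
    damaging first -- the rank-ordering an FCRA notice calls for."""
    contributions = {
        name: w * (applicant[name] - reference[name])
        for name, w in WEIGHTS.items()
    }
    adverse = {k: v for k, v in contributions.items() if v < 0}
    return sorted(adverse.items(), key=lambda kv: kv[1])[:top_n]

applicant = {"payment_history": 6, "credit_utilization": 80,
             "account_age_years": 2, "recent_inquiries": 5, "open_accounts": 3}
reference = {"payment_history": 9, "credit_utilization": 30,
             "account_age_years": 8, "recent_inquiries": 1, "open_accounts": 4}

for factor, impact in key_adverse_factors(applicant, reference):
    print(f"{factor}: {impact:+.1f} points")
```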

Ensuring Fairness in Decision-Making

The FTC enforces the US Equal Credit Opportunity Act ("US ECOA"), which "prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance." The FTC notes that federal equal opportunity laws (including the US ECOA) may be relevant to AI decision-making, and therefore companies should be mindful of such laws when using AI technologies and algorithms. Notably, under the US ECOA, the FTC may challenge a company's use of AI to make credit decisions based on factors covered by the statute if such use results in a "disparate impact" on particular ethnic groups. Therefore, to manage consumer protection risks that may be inherent in the use of AI technologies and algorithms, companies that use AI in this manner should rigorously test their algorithms to avoid discriminatory outcomes. The FTC further recommends that, when evaluating an AI algorithm or tool, companies focus on the inputs to the AI model as well as its outcomes, since a facially neutral model may produce an illegal disparate impact on protected classes based on the inputs to, or as a result of the computations made by, the AI model.
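
To make such outcome testing concrete, the following sketch computes the "adverse impact ratio" sometimes used as a first-pass screen (the so-called four-fifths rule, borrowed from US employment guidance). Neither the FTC Blog nor the US ECOA prescribes this particular test, and the data below are invented.

```python
# Illustrative first-pass disparate impact screen: compare approval
# rates across groups using the "four-fifths" adverse impact ratio.
# The threshold and toy data are illustrative, not a legal standard
# endorsed by the FTC Blog.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    return {g: approved[g] / total[g] for g in total}

def impact_ratios(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times
    the most favored group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: (r / best, r / best < threshold) for g, r in rates.items()}

# Toy example: group B is approved far less often than group A.
toy = ([("A", True)] * 80 + [("A", False)] * 20
       + [("B", True)] * 50 + [("B", False)] * 50)
for group, (ratio, flagged) in impact_ratios(toy).items():
    print(f"group {group}: ratio {ratio:.2f}, flagged={flagged}")
```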

Consumers must also be given access to, and the opportunity to correct, information used in AI decision-making. For instance, under the US FCRA, adverse action notices to consumers must identify the source of the information used to make a decision adverse to the consumer's interests, and must notify consumers of their access and dispute rights. Where companies use consumer credit information or credit reports to make decisions about a consumer, they should consider providing the consumer with a copy of the information they relied upon to make such important decisions and allowing the consumer to dispute the accuracy of any such information.

Using Empirically Sound and Robust Data and Models

Entities that provide consumer credit information or credit reports to third parties to "make decisions about consumer access to credit, employment, insurance, housing, government benefits, check-cashing or similar transactions" may be considered consumer reporting agencies and, if so, must comply with the US FCRA. Such entities have an obligation to put in place reasonable procedures to ensure the maximum possible accuracy of consumer credit reports, including maintaining the accuracy and currency of credit information about consumers and providing consumers with access to their own information, along with the ability to correct any errors.

The obligation to ensure data accuracy similarly applies to entities that are not consumer reporting agencies but provide their customer data to consumer reporting agencies for use in automated decision-making. Such "furnishers" under the US FCRA may not furnish data that they have reasonable cause to believe may be inaccurate, and must ensure data accuracy and integrity through written policies and procedures. Furnishers also have an obligation to investigate disputes received from consumers, as well as disputes referred to them by a consumer reporting agency.

AI models must also be validated and revalidated to ensure that they operate as intended and avoid discrimination. Lending laws, for instance, encourage the use of AI tools that are "empirically derived, demonstrably and statistically sound." Such AI tools must rely on data derived from an empirical comparison of sample groups, and must be created and validated using accepted statistical principles and methodology. They must also be periodically revalidated using appropriate statistical principles and methodology, and adjusted as necessary to maintain their predictive ability.
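
As a minimal sketch of what periodic revalidation might look like in practice, a model's discriminatory power can be re-measured on fresh outcomes and compared against its original validation benchmark. The metric (AUC), the tolerance, and the figures below are illustrative assumptions rather than a standard drawn from the FTC Blog or lending laws.

```python
# Minimal sketch of periodic model revalidation: recompute a
# discrimination metric (AUC) on fresh outcome data and flag the model
# for review if performance decays past a tolerance. Metric choice,
# tolerance, and data are illustrative assumptions only.

def auc(scores_pos, scores_neg):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a randomly chosen positive outscores a randomly chosen negative."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

def needs_revalidation(baseline_auc, fresh_auc, tolerance=0.05):
    """Flag the model if fresh performance drops more than `tolerance`
    below the AUC measured at initial validation."""
    return fresh_auc < baseline_auc - tolerance

# Toy example: scores for borrowers who repaid (pos) vs. defaulted (neg).
baseline = 0.82  # AUC recorded when the model was first validated
fresh = auc(scores_pos=[720, 690, 705, 650], scores_neg=[640, 600, 660])
print(f"fresh AUC: {fresh:.2f}, review needed: {needs_revalidation(baseline, fresh)}")
```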

Accountability for Compliance, Ethics, Fairness, and Non-Discrimination

Big data analytics could result in bias or other harm to
consumers. To avoid that outcome, any operator of an AI algorithm
should ask itself four key questions: (a) How representative is the
data set? (b) Does the data model account for biases? (c) How
accurate are the predictions based on big data? (d) Does the
reliance on big data raise ethical or fairness concerns?
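
The first of these questions lends itself to a simple quantitative check. The sketch below compares each group's share of a training sample against reference population shares and flags material gaps; the groups, shares, and tolerance are invented for illustration and are not drawn from the FTC Blog.

```python
# Illustrative check for the first question -- "How representative is
# the data set?" -- comparing group shares in the training data against
# reference population shares. Groups, shares, and the tolerance are
# invented for illustration.
from collections import Counter

def representation_gaps(sample_groups, population_shares, tolerance=0.05):
    """Return groups whose share of the sample deviates from the
    reference population share by more than `tolerance`."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

training_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
population = {"A": 0.60, "B": 0.25, "C": 0.15}  # hypothetical reference shares
print(representation_gaps(training_groups, population))
```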

Algorithms must also be protected from unauthorized use. Entities engaged in this sphere must also stay aware of, and take measures against, emerging technologies that have the potential to be misused, such as voice-cloning technologies. Such companies should assess how to hold themselves accountable, and consider using independent standards or independent expertise to assess the compliance of their AI tools.

Concluding Remarks

The development of regulations specific to AI is still at an early stage in both the US and Canada, but organizations are now taking a proactive stance in developing and implementing responsible and ethical AI principles, which will better prepare them for upcoming regulatory changes, mitigate their AI-related risks, and position them as responsible corporate leaders. The FTC Blog provides useful practical guidance for all companies that have begun deploying AI-enhanced automated decision tools in a consumer context, and it offers helpful, concrete examples of relevant compliance requirements.

It is interesting to note that many organizations have published AI-related frameworks that discuss principles very similar to the ones examined in the FTC Blog (including "accountability", "transparency" and "fairness"). As discussed in previous TechLex blogs, many of these frameworks focus on ethical principles (see our discussion of the AI Policy Framework released by the International Technology Law Association and the OECD), all of which add to a growing body of regulatory and industry guidance on the responsible use of AI. Moreover, it is increasingly apparent that our clients will seek guidance not only on the core principles that underlie the responsible deployment of AI systems, but also on effective governance processes to implement such principles. In this context, the Singapore Model Artificial Intelligence Governance Framework is particularly interesting because it does not simply state ethical principles; it also links them to precise, concrete governance measures that organizations can implement.

Originally published May 1, 2020.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
