Collective data rights can prevent big tech from taking away privacy


Individuals shouldn’t have to fight for their data privacy rights on their own and be responsible for every consequence of their digital actions. Consider an analogy: people have the right to safe drinking water, but they aren’t urged to exercise that right by checking the water quality with a pipette every time they drink from the tap. Instead, regulatory agencies act on everyone’s behalf to ensure that all our water is safe. The same must be done for digital privacy: it isn’t something the average user is, or should be expected to be, personally capable of protecting.

There are two parallel approaches that must be pursued to protect the public.

One is better use of class or group actions, otherwise known as collective redress actions. Historically these have been limited in Europe, but in November 2020 the European Parliament passed a measure requiring all 27 EU member states to implement measures allowing for collective redress actions across the region. Compared with the US, the EU has stronger laws protecting consumer data and promoting competition, so class or group actions in Europe can be a powerful tool for lawyers and activists who want to force big tech companies to change their behavior even in cases where the per-person damages would be very low.

Class action lawsuits have most often been used in the United States to seek financial damages, but they can also be used to force changes in policy and practice. They can work hand in hand with campaigns to change public opinion, especially in consumer cases (for example, by forcing Big Tobacco to admit to the link between smoking and cancer, or by paving the way for car seatbelt laws). They are powerful tools when there are thousands, if not millions, of similar individual harms, which add up to help prove causation. Part of the problem is getting the right information to sue in the first place. Government efforts, like a lawsuit brought against Facebook in December by the Federal Trade Commission (FTC) and a group of 46 states, are crucial. As the tech journalist Gilad Edelman put it, “According to the lawsuits, the erosion of user privacy over time is a form of consumer harm – a social network that protects users’ data less is an inferior product – that tips Facebook from a mere monopoly to an illegal one.” In the US, as the New York Times recently reported, private lawsuits, including class actions, have often “relied on evidence unearthed by government investigations.” In the EU, however, it’s the other way around: private lawsuits can open up the possibility of regulatory action, bridging the gap between EU-wide legislation and national regulators.

Which brings us to the second approach: a little-known 2016 French law called the Digital Republic Bill. It is one of the few modern laws focused on automated decision-making. The law currently applies only to administrative decisions taken by public-sector algorithmic systems. But it provides a sketch of what future laws could look like. It says that the source code behind such systems must be made available to the public, and anyone can request that code.

Importantly, the law allows advocacy organizations to request information about the functioning of an algorithm and the source code behind it even if they do not represent a specific individual or claimant who has allegedly been harmed. The need to find a “perfect plaintiff” who can prove harm in order to file a suit makes it very difficult to tackle the systemic issues that cause collective data harms. Laure Lucchesi, director of Etalab, the French government office in charge of overseeing the bill’s implementation, says the law’s focus on algorithmic accountability was ahead of its time. Other laws, like the European General Data Protection Regulation (GDPR), focus too heavily on individual consent and privacy. But both the data and the algorithms need to be regulated.


Apple promises in an ad: “Right now, there’s more private information on your phone than in your home. Your locations, your messages, your heart rate after a run. These are private things. And they should belong to you.” Apple is reinforcing this individualist fallacy: by failing to mention that your phone stores more than just your own data, the company obscures the fact that the really valuable data comes from your interactions with your service providers and others. The notion that your phone is the digital equivalent of your filing cabinet is a convenient illusion. Companies actually care little about your personal data; that is why they can pretend to lock it in a box. The value lies in the inferences drawn from your interactions, which are also stored on your phone – but that data does not belong to you.

Google’s takeover of Fitbit is another example. Google promises “not to use Fitbit data for advertising,” but the valuable predictions Google can make don’t rely on individual data alone. As a group of European economists argued in a recent paper published by the Centre for Economic Policy Research, a London-based think tank, “it is enough for Google to correlate aggregate health outcomes with non-health outcomes for even a subset of Fitbit users who did not opt out of some use of their data, in order to predict health outcomes (and thus ad-targeting possibilities) for all non-Fitbit users (billions of them).” The Google-Fitbit deal is essentially a group data deal. It positions Google in a key market for health data while giving it the ability to triangulate different data sets and monetize the inferences used in the health and insurance markets.
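To make that mechanism concrete, here is a minimal sketch in Python with scikit-learn (all data, features, and labels are invented for illustration): a model fitted only on people who shared their data can still infer health-related outcomes for people who never opted in to anything.

```python
# Hypothetical illustration of group-level inference: a model trained on a
# consenting subset can make predictions about people who never shared any
# health data. Everything here is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Opted-in" users: ordinary non-health signals (e.g. app usage patterns)
# paired with a health-related label derived from their wearable data.
X_opted_in = rng.normal(size=(1_000, 3))
y_health = (X_opted_in @ [0.8, -0.5, 0.3] + rng.normal(scale=0.5, size=1_000)) > 0

model = LogisticRegression().fit(X_opted_in, y_health)

# "Everyone else": people who never wore the device or consented to anything,
# but whose non-health signals are already observable elsewhere.
X_non_users = rng.normal(size=(5, 3))
print(model.predict_proba(X_non_users)[:, 1])  # inferred health risk for non-users
```

The point of the sketch is simply that the individual opt-out changes nothing for the group: the correlations learned from the consenting subset transfer to everyone who shares the same observable signals.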

What policymakers should do

Draft bills seek to fill this gap in the United States. In 2019, Senators Cory Booker and Ron Wyden introduced an Algorithmic Accountability Act, which subsequently stalled in Congress. The act would have required companies to conduct algorithmic impact assessments in certain situations to check for bias or discrimination. But in the US this crucial issue is more likely to be taken up first in laws applying to specific sectors such as health care, where the risk of algorithmic bias has been compounded by the pandemic’s disparate effects on different groups of the US population.

At the end of January, the Public Health Emergency Privacy Act was introduced in the Senate and House of Representatives by Senators Mark Warner and Richard Blumenthal. The act would ensure that data collected for public health purposes is not used for any other purpose. It would prohibit the use of health data for discriminatory, unrelated, or intrusive purposes, including commercial advertising, e-commerce, or efforts to control access to employment, finance, insurance, housing, or education. This would be a good start. Going further, a law applying to all algorithmic decision-making should, inspired by the French example, focus on hard accountability, strong regulatory oversight of data-driven decision-making, and the ability to audit and inspect algorithmic decisions and their impact on society.

Three elements are needed to ensure hard accountability: (1) clear transparency about where and when automated decisions are made and how they affect individuals and groups, (2) the public’s right to offer meaningful input and to call on those in authority to justify their decisions, and (3) the ability to enforce sanctions. Crucially, policymakers will need to decide, as has recently been proposed in the EU, what constitutes a “high risk” algorithm that should meet a higher standard of scrutiny.


Clear transparency

The focus needs to be on public scrutiny of automated decision-making and the kinds of transparency that lead to accountability. This includes revealing the existence of algorithms, their purpose, and the training data behind them, as well as their impacts: whether they have led to disparate outcomes, and if so, for which groups.
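As a rough illustration of what disparate-outcome reporting could look like in practice, the sketch below (plain Python, with made-up decisions and group labels) compares the rate of favorable automated decisions across groups and flags large gaps using the common “four-fifths” rule of thumb; real audits would of course be far more involved.

```python
# Minimal, hypothetical sketch of a disparate-outcome check: compare the rate
# of favorable automated decisions across groups. All numbers are invented.
from collections import Counter

decisions = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "denied"),
    ("group_b", "approved"), ("group_b", "denied"), ("group_b", "denied"),
]

totals = Counter(group for group, _ in decisions)
approvals = Counter(group for group, outcome in decisions if outcome == "approved")

# Favorable-outcome rate per group.
rates = {group: approvals[group] / totals[group] for group in totals}
print(rates)

# A common (and contested) rule of thumb: flag any group whose favorable-outcome
# rate falls below 80% of the best-performing group's rate.
best = max(rates.values())
flagged = {group: rate for group, rate in rates.items() if rate < 0.8 * best}
print("potential disparate impact:", flagged)
```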

Public participation

The public has a fundamental right to call on those in power to justify their decisions. This “right to demand answers” should not be limited to consultative participation, in which people are asked for their input and officials then move on. It should include empowered participation, where public input is mandated before the rollout of algorithms that pose significant risks, in both the public and private sectors.

Penalties

Ultimately, the power to sanction is key for these reforms to succeed and for accountability to be achieved. It should be mandatory to establish auditing requirements for data targeting, verification, and curation; to equip auditors with this baseline knowledge; and to empower oversight bodies to enforce sanctions, not only to remedy harm after the fact but to prevent it.


The issue of data-driven collective harms affects everyone. The Public Health Emergency Privacy Act is a first step. Congress should then use the lessons from implementing that act to develop laws that focus specifically on collective data rights. Only through such action can the US avoid situations where inferences drawn from data foreclose people’s access to housing, employment, credit, and other opportunities in the years to come.



