The field of machine learning and artificial intelligence has grown and evolved enormously in the last few years. Not only does machine learning underpin everyday tools many of us use, like virtual assistants and online recommendation engines, but AI will also likely soon play a larger role in making medical diagnoses, discovering new pharmaceutical drugs and creating novel materials.

But despite all these remarkable possibilities, there's also the potential that AI might cause more harm than good, sometimes inadvertently, as when AI makes biased predictions, and sometimes intentionally, when it's used to manufacture misinformation or convincing deepfakes. Such worrisome developments highlight the growing need for more discussion and more regulation around how AI can be used ethically.

This is especially important now: AI models are becoming increasingly powerful on the one hand, and on the other, they are playing an increasingly influential role in decision-making in sectors like finance, commerce and healthcare.

Without considering the ethical implications of how AI is developed and implemented, it’s likely that future AI will erode privacy, help spread misinformation, and further worsen inequalities that already exist in our society. As the question of ethical AI continues to evolve, here are some of the main issues that are worrying experts and observers alike.

AI’s Bias Problem

While machines are not biased in themselves, the way a research problem is framed, how data is gathered and interpreted, and how machine learning models are trained can all shape the predictions an AI model ultimately makes. Sometimes these biases seep in when data is collected unevenly, or when models unwittingly reflect the biases of their human creators. We're already seeing what happens when such baked-in algorithmic biases influence automated decisions about creditworthiness, hiring, admissions, and even who gets paroled and who doesn't: the result is unfair and discriminatory practices that would otherwise be illegal.
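
One way to make this concrete is to audit a model's outputs for gaps between groups. The sketch below is a minimal, hypothetical example of checking one simple fairness measure, the demographic parity gap, on a set of automated decisions; the data, column names and values are purely illustrative, not taken from any real system.

```python
# Minimal, hypothetical sketch: surface a baked-in bias by comparing
# positive-decision rates across groups (the "demographic parity" gap).
# The data and column names are illustrative, not from any real system.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, decision_col: str, group_col: str) -> float:
    """Largest gap in positive-decision rates between any two groups."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Illustrative loan decisions: a large gap suggests the model, or the data
# it was trained on, treats the two groups very differently.
decisions = pd.DataFrame({
    "approved": [1, 1, 1, 0, 1, 0, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})
print(demographic_parity_gap(decisions, "approved", "group"))  # ~0.6
```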

If such biases go unchallenged in a future where life-changing decisions are largely automated, it could potentially pave the way to an “administrative and computational reordering of society,” weakening civil society to the point that it becomes fertile ground for the growth of authoritarianism.

To combat this problem, some experts are calling for wider use of de-biasing algorithms, while others take a broader view, recommending that civil rights laws be updated to apply to digital technologies. There's also the possibility of implementing "regulatory sandboxes" to test AI without exploiting testers from marginalized communities, and of more broadly "decolonizing" the industry. In lieu of actual regulations, some are advocating for a set of self-regulatory best practices that companies and institutions can adopt, in addition to boosting algorithmic literacy in the wider public.
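
To give a flavor of what a de-biasing algorithm can look like, here is a minimal, hypothetical sketch of one well-known pre-processing idea, often called "reweighing": training examples are reweighted so that the sensitive attribute and the outcome look statistically independent before a model is fit. The column names and usage here are assumptions for illustration only.

```python
# Hypothetical sketch of "reweighing"-style pre-processing: weight each
# training row so that group membership and the label look statistically
# independent before a model is trained. Column names are illustrative.
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Weight each row by P(group) * P(label) / P(group, label)."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda row: p_group[row[group_col]] * p_label[row[label_col]]
        / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )

# The resulting weights can then be passed to most learners, e.g.:
# model.fit(X, y, sample_weight=reweigh(train_df, "group", "approved"))
```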

The Need for Explainable, Interpretable AI

As some rightly point out, many of the ethical issues around AI arise from the fact that models often operate as an opaque "black box" of sorts, so that even their creators aren't totally sure why models make the decisions they do. Thus, there's a need for more research into making AI more transparent: in other words, creating AI that's interpretable (i.e., the underlying mechanics of a model can be inspected) and also more explainable (i.e., the "why" behind its predictions can be known). If explainability and interpretability are taken into account by AI researchers from the get-go, such efforts could help foster public trust in machine learning systems in the long run.
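
To illustrate one simple explainability technique, the hypothetical sketch below uses permutation importance: it shuffles one feature at a time and measures how much the model's accuracy drops, giving a rough sense of the "why" behind the predictions. The model and data are stand-ins chosen for illustration, not any particular production system.

```python
# Minimal, illustrative sketch of permutation importance: shuffle one feature
# at a time and see how much the model's score drops. Features whose shuffling
# hurts the most are the ones the model leans on. Model and data are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

rng = np.random.default_rng(0)
for i in range(X.shape[1]):
    X_shuffled = X.copy()
    X_shuffled[:, i] = rng.permutation(X_shuffled[:, i])  # break feature-target link
    drop = baseline - model.score(X_shuffled, y)
    print(f"feature {i}: importance ~ {drop:.3f}")
```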

AI-Powered Mass Surveillance

Currently, one of the biggest ethical concerns around AI is the loss of privacy and the rise of automated surveillance. While most people will agree that using AI to track down and identify criminals is a good thing, the flip side is that governments and law enforcement are using those same technologies to gather massive amounts of biometric data. Such practices present a slippery slope where otherwise law-abiding citizens might be targeted and monitored for exercising their right to speak out or protest against their government, resulting in potential abuses ranging from pervasive tracking and surveillance to identity theft, all automated at scale. This all points to the need for stronger regulations, and for AI companies to incorporate privacy and security measures as a fundamental component, rather than as a reactive mea culpa after something goes wrong.

Big Tech and ‘Ethics Washing’

Perhaps nothing is more troubling than how big tech companies are approaching the issue of ethical AI. While all of the tech giants seem to have adopted some kind of self-regulatory measures on the ethical use of AI — such as embracing ethics charters or establishing ethics boards — some in the industry have wondered whether these actions are sincere, or merely a ploy of “ethics washing” to avoid stricter government regulation.

As one example of Big Tech's potentially problematic approach to AI ethics, one can look to Google's controversial firing of two researchers who co-led its ethical AI team, after they published research warning of the dangers of large natural language processing models. Yet another troublesome instance comes from Amazon, which announced a moratorium on the sale of its facial recognition technology to law enforcement agencies due to privacy concerns, only to have reports surface last week that it plans to install "always-on" AI-powered cameras on its delivery vehicles.

Such incongruities suggest that big companies' enthusiastic embrace of self-regulatory actions and ethics boards may stem less from a sincere concern that AI be used responsibly than from a public relations strategy to protect their bottom line and discourage regulatory oversight.

Ultimately, there are a lot of complex questions to untangle around the ethical use of AI: Who gets to determine what's considered ethical and what is not? How can we ensure that algorithms and data are used fairly? What can companies and governments do to strengthen transparency and accountability? Easy answers won't arrive any time soon, but it's vital that we start asking these questions now.

Image: cottonbro via Pexels