Faculty of Law, University of New South Wales (UNSW), member of the IEEE Big Data Initiative Steering Committee, and Greg Adamson, president, IEEE-SSIT, and Associate Professor at the Melbourne School of Engineering, University of Melbourne.

The social implications of technology have been with us for as long as humans have created technology, which is to say as long as we’ve been human. In fact, technology is arguably intrinsic to humans and a major differentiator between humans and other creatures.

In Paleolithic times, stone tools could be used to kill game, or fellow humans. In Greek mythology, Icarus’ hubris was enabled by technology. In our time, headline revelations about National Security Agency spying, Anonymous’ hacking and security breaches at Sony, at Target – you name it – no longer shock us.

And now the Internet of Things is arising on the horizon, with the “promise” of ubiquitous sensors, big data and analytics to improve our lives. Yet along with it may come opaque algorithms and a growing sense that, perhaps, George Orwell will be proven prescient.

It’s been said that technology is neither good nor bad, but neither is it neutral. Technology does indeed have major, often unforeseen or poorly understood implications for society. Granted, this is the stuff of daily conversation – How secure is our data? How private are our conversations? How long before a trove of data defines our lives in the eyes of others using an opaque algorithm?

We would argue that the dynamics of the market may blind some technologists to the implications of their work, while for others, creativity is the driver and reflection is an afterthought. Conversely, policymakers too often do not fully grasp the implications of technological developments, and how these interact with existing laws and policies. Where policymakers make mistakes, there can be a significant impact on the community.

We come to the social implications of technology from two different backgrounds, but our interests intersect where automation and the use of algorithms can produce – or reduce – social value.

Our challenge is to grasp the ethical and legal implications and impacts of such tools in a potentially sensitive context. For instance, governments and agencies are accumulating data on everyone: should algorithms be applied to tease out insights, particularly in the name of preventing crime and terrorism?

To ensure that these tools do not spin out of the control of the democratic society that applies them, we need to ask questions: “What do agencies want from such data?” “What biases are inherent in the algorithms that produce results?” Perhaps most importantly: “What legal frameworks should society impose for positive, just outcomes?”

At first blush, algorithms just perform automated analysis at high speed, right? But it’s more complicated than that. Not to put too fine a point on it, but a recent op-ed in The New York Times – “Artificial Intelligence’s White Guy Problem” – points out that cultural biases seep into algorithms. To quote briefly from the article:

“Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters … Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes … [and] we will see ingrained forms of bias built into the artificial intelligence of the future.”

Evaluation, review, oversight, accountability and legal frameworks all seem appropriate if, for instance, the use of big data and analytics for profiling terrorism suspects has undesirable impacts on some communities.

So what can we do about the non-neutral, societal implications of technology?

The Institute of Electrical and Electronics Engineers (IEEE) has taken to heart its mission to advance technology for the benefit of humanity and has initiated efforts to ensure that policymakers are aware of the implications of technology-related decisions. Without making value judgments, the IEEE’s myriad technical societies can provide policymakers with a sense of the outcomes of various technology choices.

Founded 44 years ago, the IEEE Society on Social Implications of Technology (IEEE SSIT) is working with various IEEE technical societies including the IEEE Future Directions initiatives to learn and share how technologists can integrate an awareness of the social implications of their work early in the conception and design phases.

Specifically, the SSIT is focusing on five distinct areas: humanitarian/development technology; technology and sustainability; technology and ethics; access to technology, in both the digital-divide and the STEM (science, technology, engineering and mathematics) senses; and technology use, from ergonomics to smart cities. (We also think integrating the liberal arts into engineering curricula will be helpful, and we’re working with the Massachusetts Institute of Technology (MIT) to do so.)

Externally, we’re applying our humanitarian/development-technology perspective to examine, for instance, the 17 U.N. sustainable development goals and to ask thought-provoking questions: Does a particular technology further a specific goal? Given various technology choices, what are the potential outcomes?

The societal implications of technology are a sprawling, pervasive topic, and a few very large genies – climate change and nuclear weapons, for instance – may never be bottled again. But an effort is underway to revive critical thinking on a pragmatic level where it could make a difference going forward.