With great power comes great responsibility. As the details of the Facebook and Cambridge Analytica scandal reveal, the IT industry is still grappling with the question of how to deal with ethical dilemmas.

First, do no harm. This is the underlying message of the Hippocratic Oath, historically taken by physicians to show they will abide by an ethical code of conduct. Plumbers, construction workers, law enforcement: almost any professional whose work impacts the public must abide by some sort of ethical code of conduct.

There's one fairly notable exception: technology. While there are organization- and company-specific codes of conduct, like the guidelines from the Association for Computing Machinery (ACM) and IEEE Computer Society (IEEE-CS) joint task force on Software Engineering Ethics and Professional Practices, there is no single, all-encompassing set of standards that covers the entire industry.

Perhaps that's because, as Yonatan Zunger writes in the Boston Globe, "… [T]he field of computer science, unlike other sciences, has not yet faced serious negative consequences for the work its practitioners do." But given the still-emerging details about Cambridge Analytica's role in building software to help clients manipulate voters, that could be about to change.

Computer science and software development have grappled with ethics problems in the past, but these problems now seem to be happening not only more frequently, but also at greater scale and with greater impact. In 2015, independent tests revealed that Volkswagen engineers had programmed cars to cheat emissions standards. In the wake of the 2016 U.S. presidential election, Facebook, among others, is grappling with an epidemic of fake news, and is now inextricably linked to Cambridge Analytica's weaponization of personal user information. The U.S. is struggling to come to grips with Russian hacking and interference in its elections, and with the role social media platforms like Twitter and Facebook played. And the current president campaigned on a promise to build on an existing registry (or create a new one) to track members of the Muslim faith.

Good versus evil?

These are just a few examples of how software can be used for nefarious purposes. There is no way to know definitively every possible outcome of the development and use of every piece of technology, every line of code. So it's up to those who design and build the products, software packages, apps and solutions we use daily to do the right thing. That's a lot of pressure.

It's also difficult to navigate what's right and wrong if you're pressured to meet deadlines, or your livelihood is on the line; a code of ethics can provide context and a framework for professionals to fall back on, says Dave West, product owner at training company Scrum.org. And while he'd like to see such a thing, he says it's understandable that such a diverse group of thinkers might not be able to agree on all aspects of what a code should entail.

"I would love to see a standardized industry code of ethics; we do have our own that falls under our mission of improving the profession of crafting software. At the heart of it are our five major values of openness, courage, respect, focus and commitment.
And we feel like that is a solid foundation for anyone to fall back on if they are feeling uncertain about any part of their job responsibilities, because they can step back and look at those values and say, 'Am I doing the right thing, here, based on these things I believe in?'" West says.

The debate about ethics in software development has raged for as long as the profession has been around. It can be nearly impossible to assess all the potential applications of a technology, good and bad, and that's both the beauty and the horror of the issue, says Shon Burton, founder and CEO at recruiting firm HiringSolved, which uses AI to help companies identify diverse talent.

Any tool can be a weapon

"Any tool can be a weapon depending on how you use it. There's no way to know every single possible application of a technology. For us, using AI and automation, the stuff I can think about now that we're close to it, I can see good and bad, and both are easily accessible. For our applications, we can help clients screen for diverse candidates. But we see, also, that it could be used to screen out people based on ethnicity, race or gender. We have a code of conduct internally that we all adhere to. But we understand the potential and the unintended consequences," Burton says.

"If safety came first, the Facebook Graph API used by Cambridge Analytica, which raised widespread alarm among engineers from the moment it first launched in 2010, would likely never have seen the light of day," Zunger writes.

In the absence of an industry-wide set of ethical standards, individuals and even some corporate entities are taking public stands behind their values. In December, the NeverAgain.tech movement circulated a pledge to resist "…build[ing] a database of people based on their Constitutionally-protected religious beliefs. We refuse to facilitate mass deportations of people the government believes to be undesirable," the pledge reads. It has now gathered more than 2,500 signatures. GrubHub CEO Matt Maloney took a lot of heat for his stand against hateful, demeaning and discriminatory actions and language. And Oracle executive George Polisner very publicly resigned his position after the company's co-CEO accepted a role in the incoming presidential administration.

It can be difficult to know where the line between right and wrong lies in this context, even if you're walking it. While one standardized code of ethics could be a solution, it may be more important to teach people how to ask the right questions, says Scrum.org's West.

"Personally, I'd love to see more education on teaching ethics than is presently available, especially in a professional context rather than just a course about theory, because ethics in isolation won't work unless it's part of broader professional standards. There's also the issue that individuals often can't build software alone, and they're not making these 'wrong' decisions all at once, but incrementally," West says.

Teaching people to ask the right questions involves understanding what the questions are, says Burton, and recognizing that everyone's values are different; some individuals have no problem working on software that runs nuclear reactors, or on targeting systems for drones, smart bombs or military craft.

"The truth is, we've been here before, and we're already making strides toward mitigating risks and unintended consequences. We know we have to be really careful about how we're using some of these technologies.
It's not even a question of can we build it anymore, because we know the technology and capability is out there to build whatever we can think of. The questions should be around should it be built, what are the fail-safes, and what can we do to make sure we're having the least harmful impact we can?" he says.

Burton believes, despite the naysayers, that AI, machine learning and automation can actually help solve these ethical problems by freeing up humans to contemplate more fully the impacts of the technology they're building.

"Right now, there's so much pressure to meet deadlines and there's market pressure to release products, and that's taking up developers', CIOs', CTOs' and other IT leaders' time," Burton says. "If we can automate more of the processes and relieve some of the human effort, they can apply better critical thinking and hopefully head off some of these issues before they get critical."

There's no one 'right answer' here, and a code of ethics certainly won't put all the ethical issues to rest. But it could be a good place to start if individuals and organizations want to harness the great power of technology to create solutions that serve the greater good.