I wrote a few weeks ago that technology doesn’t change who you are, it magnifies who you are. One thing technology does, however, is force us to question where the ethical line sits when we apply new tools. I ran across three things this week that each ask the same question: where do we draw that ethical line?
Work Smarter, Not Harder – But, can I still get paid like I’m working harder?
The first article came from Fast Company’s Gwen Moran, “Is it unethical to not tell my employer I’ve automated my job?” In it, an employee posted on a message board that they had created an algorithm that essentially turned 40 hours of hard work into two hours of smart work. This situation runs parallel to what we see in legal work. Are we paying for results, or are we paying for hours worked? How would we reward a fourth-year associate who found a technique that reduced 40 hours of billable work to two? Would we pay for the result, or pay for the time? If we pay for the time, we may be encouraging inefficiency. If we pay for results, we may be signaling to employees that the technology could eventually replace them. (The person who created the algorithm purposefully put errors in the process so that it couldn’t completely replace them.)
Technology Makes Cheating Easier… and Cheaters Easier to Catch
The second article was a Marketplace report on “How AI is catching people who cheat on their diets, job searches and school work.” We may all fudge our facts from time to time, whether that means sneaking an unapproved snack on a diet, borrowing a line or two from someone else’s written work, or embellishing when applying for a new job. Well… there’s an algorithm for that. The technology is out there (or is being developed) that makes cheaters easier to catch. Perhaps you want an app that virtually slaps your hand when you sneak a candy bar or skip a workout. Or you want to detect plagiarism in a law student’s work. Or you want the actual truth about a job candidate’s experience. Modern technologies and algorithms are there to help. Darrell West of the Brookings Institution’s Center for Technology Innovation says it is hard for us to escape the effects of big data: “Artificial Intelligence can detect cheating just because it can compare what we say with what we do.”
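To make the plagiarism-detection idea a little more concrete, here is a minimal toy sketch of one common approach: comparing word n-grams between a submission and a source and scoring their overlap. This is an illustrative assumption on my part about how such tools can work, not the actual algorithm used by any commercial detector (real systems are far more sophisticated).

```python
def ngrams(text, n=3):
    """Return the set of word n-grams in a text (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=3):
    """Jaccard similarity of n-gram sets: 0.0 (no overlap) to 1.0 (identical)."""
    a, b = ngrams(submission, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "The duty of loyalty requires a lawyer to avoid conflicts of interest"
copied   = "The duty of loyalty requires a lawyer to avoid conflicts of interest"
fresh    = "Contract formation requires offer acceptance and valid consideration"

print(overlap_score(copied, original))  # identical text scores 1.0
print(overlap_score(fresh, original))   # unrelated text scores 0.0
```

Even this crude version shows why cheaters are easier to catch than ever: once text is digitized, comparison is cheap and automatic.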
Advancements in Modern Technology Can Even Catch a Killer – But, did I agree to that?
I was catching up on my podcasts after a week in Ireland, and I listened to the two-part series from the New York Times’ The Daily on A New Way to Solve a Murder. Unintended consequences are pretty common in technological advancements. One example: genetic testing sites may become the new de facto database for criminal DNA testing. People submit their DNA to these companies wanting to learn more about their heritage, but one side effect is that once you have over a million DNA samples, you have a very good database for catching a killer who left DNA at a crime scene. The federal CODIS system holds DNA profiles of more than 11 million convicted offenders. However, there is a bias in that system: poor Black males from urban areas are heavily overrepresented (draw your own conclusions on why that is). The private DNA sites like 23andMe, by contrast, tend to skew older, whiter, and more middle-class. While the CODIS system requires an exact match, the private DNA sites are designed to trace family trees. Where’s the ethics in it all? The podcast covers a few cases, including the Golden State Killer, who was identified using these privately collected DNA databases. Should there be an opt-in or opt-out for the participants? Where’s the ethical line now that criminal DNA methodology has moved to the private sector?
It’s hard to deny that there is a huge upside to using the technology in a way that most people who submitted to the database didn’t agree to. The question also arises of how the technology changes the way we “prove” what happened. Instead of working a case to narrow the suspects and then using DNA to identify the culprit, the process is turned on its head: the DNA is used to identify hundreds or thousands of relatives, and investigators then work backward to narrow the possible suspects. Is that the new ethical line?
As we adapt to new technologies, and all of their unintended consequences, we have to ask ourselves, as a society, where we draw the ethical line. It’s not a simple question, nor does it have a simple answer. But it is one we constantly need to be asking and answering.