Reading-Notes-for-Advanced-Software-Development-in-Python-Course

Ethics

Code of Ethics

PREAMBLE

The short version of the code summarizes aspirations at a high level of abstraction; the clauses that are included in the full version give examples and details of how these aspirations change the way we act as software engineering professionals. Without the aspirations, the details can become legalistic and tedious; without the details, the aspirations can become high sounding but empty; together, the aspirations and the details form a cohesive code.

Software engineers shall commit themselves to making the analysis, specification, design, development, testing and maintenance of software a beneficial and respected profession. In accordance with their commitment to the health, safety and welfare of the public, software engineers shall adhere to the following Eight Principles:

  1. PUBLIC – Software engineers shall act consistently with the public interest.

  2. CLIENT AND EMPLOYER – Software engineers shall act in a manner that is in the best interests of their client and employer consistent with the public interest.

  3. PRODUCT – Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.

  4. JUDGMENT – Software engineers shall maintain integrity and independence in their professional judgment.

  5. MANAGEMENT – Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.

  6. PROFESSION – Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.

  7. COLLEAGUES – Software engineers shall be fair to and supportive of their colleagues.

  8. SELF – Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.

Ethics in the workplace

As a software developer, it doesn't often feel like you are faced with moral dilemmas on a day-to-day basis. Usually, you'll be trying to figure out how something should be implemented, or how to fix some bug. However, moral dilemmas can arise even when we are not fully aware of them. Consider these situations:

  1. You are asked to create a tool which captures customer email addresses, and then uses the API of a popular career-oriented social network to find and store demographic data about each customer, in order to send targeted marketing campaigns.
  2. You are moving a big new feature from BETA testing to full rollout. After the feature was deployed in BETA, the decision was made to only give it to your premium users. Because of the tight deadline, there is no time to make an exception for all the BETA testers, so they will lose access to it.
  3. Somebody asks you to implement a new pricing model that charges users visiting the .com version of your site 30% more than users visiting the .co.uk version. This will help your company achieve its revenue targets.

Would you feel comfortable being the engineer behind these product specs? Would you be happy if you were the customer of this product?

It’s easy to look at a product decision and think “that was the responsibility of the design/UX/product team”, and it’s true that the person coming up with the ideas might not be a developer, but as engineers we are responsible for the implementation, at least. Being asked to do something at work does not remove your responsibility to consider whether it is a wise idea. This is at odds with the opinions of many developers, according to the most recent Stack Overflow survey: although nearly 80% of developers thought they did need to consider the ethical implications of their code, 58% thought that upper management was ultimately responsible for the software. The thing is, from a legal perspective, these developers might be wrong.

In many professions, “I was only doing what I was told” is not an acceptable defence, and courts are applying the same principle to unethical software engineering. In the US, a Volkswagen engineer received a three-year prison sentence and a $200,000 fine for implementing the software which allowed cars to recognise when they were being tested for fuel efficiency and display false readings, despite not being the “mastermind” behind the plan. So, even if you have no interest in doing the right thing for your customers (although I hope you do), you should at least question unethical practices for your own sake: if you built something, as the developer you can be held responsible, no matter what happens to upper management or a product owner. While the problems we work on might not usually pose questions of criminality, the principle is the same.

Ethics in Technology

Self-driving vehicles are coming, with predictions that in a few years, tens of thousands of semi-autonomous vehicles may be on the roads. About $80 billion has been invested in the field. Tech companies are working feverishly on them, with Google-affiliated Waymo among those testing cars in Michigan, and mobility companies like Uber and Tesla racing to beat them. Detroit’s automakers are placing a big bet on them. A testing facility to hurry along research is being built at Willow Run in Ypsilanti.

There’s every reason for excitement: Self-driving vehicles will ease commutes, returning lost time to workers; enhance mobility for seniors and those with physical challenges; and sharply reduce the more than 35,000 deaths on U.S. highways each year.

But there is also a host of nagging questions to be sorted out, from what happens to cab drivers to whether such vehicles will create sprawl.

And there is an existential question:

Who dies when the car is forced into a no-win situation?

“There will be crashes,” said Van Lindberg, an attorney in the Dykema law firm’s San Antonio office who specializes in autonomous vehicle issues. “Unusual things will happen. Trees will fall. Animals, kids will dart out.” Even as self-driving cars save thousands of lives, he said, “anyone who gets the short end of that stick is going to be pretty unhappy about it.”

Few people seem to be in a hurry to take on these questions, at least publicly.

It’s unaddressed, for example, in legislation moving through Congress that could result in tens of thousands of autonomous vehicles being put on the roads. In new guidance for automakers by the U.S. Department of Transportation, it is consigned to a footnote that says only that ethical considerations are “important” and links to a brief acknowledgement that “no consensus around acceptable ethical decision-making” has been reached.

Whether the technology in self-driving cars is superhuman or not, there is evidence that people are worried about the choices self-driving cars will be programmed to make.

Last year, for instance, a Daimler executive set off a wave of criticism when he was quoted as saying its autonomous vehicles would prioritize the lives of its passengers over anyone outside the car. The company later insisted he’d been misquoted, since it would be illegal “to make a decision in favor of one person and against another.”

Last month, Sebastian Thrun, who founded Google’s self-driving car initiative, told Bloomberg that the cars will be designed to avoid accidents, but that “If it happens where there is a situation where a car couldn’t escape, it’ll go for the smaller thing.”

But what if the smaller thing is a child?

How that question gets answered may be important to the development and acceptance of self-driving cars.

Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found that while respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest number of people possible regardless of whether they were passengers or people outside of the car, they were less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”

Self-driving cars could save tens of thousands of lives each year, Shariff said. But individual fears could slow down acceptance, leaving traditional cars and their human drivers on the road longer to battle it out with autonomous or semi-autonomous cars. Already, the American Automobile Association says three-quarters of U.S. drivers are suspicious of self-driving vehicles.