1. According to Lawler, what is “the problem of technology”? What moral principles does he appeal to in making his argument? To what extent do you agree or disagree with that argument?
2. Suppose we were to create a true, computational artificial intelligence (one functionally equivalent to our own self-conscious intelligence). What policies, laws, or regulations should we put in place to protect both the AI and ourselves? What moral principles are at work in your account?
3. How do computers make our lives both easier and worse? Give examples from your own life. Be sure to define what you mean by “easier” and “worse,” and use an ethical theory to justify how you reach your conclusion.