Your bed can't move between rooms on its own at the tap of a phone app. Your toaster can't pour a bottle of water. Your garage door doesn't wash itself. The bed, the toaster and the garage door each perform a specific function well: the function we need, nothing more, nothing less.

But what if, on Monday, your bed sensed that you should be at the gym at 9 a.m. and vibrated until you got up? Then the toaster refused to turn on because it decided you didn't need the extra carbs in that bagel. It was helping you. And maybe you had been traveling a lot, and the garage door, knowing that so much travel would be bad for your spine, refused to open when you got in your car. Welcome to the world of smart A.I.

Can A.I. machines, agents and robots be too smart? Just because we could design a machine to be intelligent doesn't mean that we should.

Robots attempt to imitate human behavior. Isn't it logical, then, that if ethical people can make unethical choices, ethical robots could make unethical choices too?

The moral compass of machines

Humans have morality. These guiding principles help us distinguish right from wrong, good behavior from bad. The concept is rooted in ethics, the branch of philosophy that examines moral behavior through ideas such as justice, virtue and duty.

When we think about our car, we might be interested in fuel economy. When we reflect on our health, topics like comfort and lifestyle come to mind. And when our thoughts turn to nature, we may think about natural selection and survival of the fittest.

Pontificating about morality and virtue lands us quickly in the world of consequentialism, the doctrine that the morality of an action is to be judged solely by its consequences. And actions can have multiple, conflicting outcomes.
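Even a toy sketch shows how quickly "judge an act by its consequences" splinters into design choices. The following hypothetical Python model is invented purely for illustration (the `Outcome`, `total_good` and `judge` names are not from any real library): it scores the same toaster dilemma differently depending on which aggregation rule the designer happens to pick.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Outcome:
    """One possible consequence of an action: a value per affected party
    (positive = benefit, negative = harm)."""
    values: Dict[str, float]

def total_good(outcome: Outcome) -> float:
    # "Total" flavor: judge by the net good summed across everyone.
    return sum(outcome.values.values())

def equal_consideration(outcome: Outcome) -> float:
    # "Equal consideration" flavor: the same net good, penalized by how
    # unevenly it is distributed among the parties.
    vals = list(outcome.values.values())
    return sum(vals) - (max(vals) - min(vals))

def judge(outcomes: List[Outcome], rule: Callable[[Outcome], float]) -> float:
    # "Maximizing" flavor: score an act by its best possible consequence.
    return max(rule(o) for o in outcomes)

# Toy dilemma: should the toaster serve the bagel or withhold it?
serve = Outcome({"you": 2.0, "your_doctor": -1.0})
withhold = Outcome({"you": -3.0, "your_doctor": 1.0})

print(judge([serve], total_good))           # 1.0
print(judge([withhold], total_good))        # -2.0
print(judge([serve], equal_consideration))  # -2.0
```

Swap in a different rule and the "right" act changes, which is precisely the design problem this piece circles.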
If we as humans have trouble making these decisions, how are we going to program machines to make them? Utilitarianism could be one answer, but we have more than one choice when deciding how to design machine intelligence:

Consequentialism: determines whether an act is morally right based only on its consequences.
Actual consequentialism: adds that moral rightness depends on the actual consequences.
Direct consequentialism: assesses whether the act is moral based on the act itself.
Evaluative consequentialism: shifts the morality to the value of the consequences.
Hedonism: determines moral rightness based on the pleasures and pains of the consequences.
Maximizing consequentialism: depends on which of the consequences are best (versus average).
Aggregative consequentialism: treats moral rightness as a function of the values of the parts of those consequences.
Total consequentialism: assesses moral rightness based on the total, or net, good of the consequences.
Universal consequentialism: assesses moral rightness for all the people involved in the consequences.
Equal consideration: determines moral rightness based on the equality of the consequences among the parties involved.
Agent-neutrality: moral rightness does not depend on whether the consequences are evaluated from the perspective of the agent or an observer; every agent has the same aim of maximizing utility.

So let's just quickly program morality into the machine and get on our way? It turns out that programming morality is complex, even before we get to evaluating the outcomes experienced through machine intelligence or robotic involvement.

Linking machine intelligence to ethical philosophy

Roboethics, or robot ethics, concerns how we as human beings design, construct and interact with artificially intelligent beings.
Roboethics can be loosely categorized into three main areas:

Surveillance: the ability to sense, process and record; direct surveillance through sensors and processors; a magnified capacity to observe, whether for security, voyeurism or marketing.
Access: new points of access and entrance into previously protected spaces; access to information about a space, whether physical, digital or virtual; objects in rooms rather than files in a computer, e.g. micro-drones the size of a fly.
Social: new social meanings that arise from interactions with robots and implicate privacy flows; a changed sensation of being observed or evaluated.

Robots do not understand embarrassment. They have no fear, they are tireless, and they have perfect memories. Designing robots that spy, whether from your back porch or while your car is parked, raises the question of how surveillance, access and social ethics will be addressed as we develop algorithms that assist humans.

We've heard about machine intelligence agents that enable ubiquitous wireless access to charge our mobile phones autonomously. We've fantasized about eating pancakes in bed while robots serve us (or maybe that was just me). There have been many technological advances since George Orwell's 1984 and its warnings about the risk of visible drones patrolling cities. Or we could reject the Big Brother theory altogether and join the vision of Daniel Solove, in which we live in an uncertain world where we don't know whether the information collected about us is helping or hurting us.

The First Amendment seems like a logical safeguard. But how do we balance excessive surveillance against progress without violating the First Amendment's protections for speech and assembly?

As we answer one question, three more rise to the surface.

Where is machine learning being used?

How much sensitivity do we design into machine intelligent beings? How much feeling should we architect into an armed drone?
Should the ethical boundaries change if we're simply designing a robotic vacuum cleaner that can climb walls? Where do we draw the line between morality and objectives? You'd better make my toast today. But tomorrow, I'm fine with the refrigerator locking shut because I've exceeded my caloric intake for the day.

Society, ethics and technology will see a heavy integration of rights and moral divisions over the next 10 years. Who designs the rules, processes and procedures for autonomous agents? That question remains unanswered.