QUESTION 1: SECURITY (5 POINTS; 500+ WORDS, 3+ citations)

Smart home technologies promise to make your daily life easier and provide you with more data
about yourself. They can help parents monitor their children, track and help you improve your
energy usage, and give you remote access to cameras, lightbulbs, and other devices. However,
as we have discussed in class, the Internet of Things raises a number of security issues, and many
devices have vulnerabilities that make them easy to hack.

For this question, do some research on the security challenges of creating a smart home. You
can review PPT slides, class readings, or find some additional resources. I’ve listed some
examples below, but I encourage you to look for other articles.

In your response, first describe and discuss what you think are three of the most important
security risks of using these devices, and tell me why these are risks and/or how they introduce
vulnerabilities in the home. Then describe two concrete steps that could be taken to
reduce or remove these security risks. What are the barriers to these solutions being implemented?
Finally, tell me who you think should ultimately be responsible for device security (the
companies making these devices? consumers buying them? the government? someone else?)
and why.

Suggested readings to get you started:
How Nest, designed to keep intruders out of people’s homes, effectively allowed hackers to get in. From The Washington Post.
In the Rush to Join the Smart Home Crowd, Buyers Should Beware. From The New York Times.
Wroclawski, D. (2020). Data Security and Privacy Gaps Found in Video Doorbells by Consumer Reports’ Tests. Consumer Reports.



QUESTION 2: ETHICS (5 POINTS; 500+ WORDS, 3+ citations)
Computer science researchers are increasingly using advanced computational methods like
machine learning and neural networks to infer attributes from text or images. This is fueled in
large part by the rise of social and mobile media, where end-users contribute and label content
as part of their use of these services. Furthermore, advances in technologies like facial
recognition are providing new tools for conducting these kinds of analyses.

For this question, you are going to evaluate a project that applied machine learning techniques
to a large dataset of facial images to identify differences in facial features across heterosexual
and homosexual men and women. This is commonly referred to as the “Gaydar” study because
the researchers claim to be able to identify sexual orientation (via binary classification) with high
accuracy for both men and women, especially when they have more than one image of the
person.

For this question, you should do some research into this study (I’ve linked to some articles below
to get you started), then address the following questions:

1. From an ethics perspective, what are the potential benefits and risks of this study? In
other words, how can building a neural network that identifies sexual orientation from
pictures be used to advance public good? How could it be used in harmful ways?
2. AI researchers often conduct studies like this one to see if it’s possible to infer a specific
attribute. Many of these researchers are only thinking about the “pure science” of the
research rather than how the research could be used outside of the research setting.
Should researchers be *required* to consider potential negative applications of their
research before they start projects like this? Why or why not? And if they identify
potential negative uses, should they be required to take extra steps in their study design,
data analysis, and/or presentation of results to minimize future misuse? What might that
look like?
3. As one of the articles notes, “…on the surface, the study was simply yet another technical
paper detailing a convolutional neural network application.” In fact, the study was
evaluated and approved by Stanford’s IRB, which suggests the IRB determined there
were minimal risks in this study. In light of this, do you think IRBs need to change how
they evaluate risk in computational research? What kinds of additional
questions/assessments might they make for research projects like this one?

Suggested readings to get you started:
Original Study (note: this is an academic article, so it’s quite dense and I don’t expect you to read/understand the technical aspects of the research): Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation From Facial Images
AI ‘Gaydar’ And How The Future Of AI Will Be Exempt From Ethical Review
Why Stanford Researchers Tried to Create a ‘Gaydar’ Machine








 