CS181: Computers, Ethics, and Public Policy
June 1-2, 2011
Session 1. Wednesday, June 1, 9:00-10:00, Gates 104
Anonymity on the Internet
Georgia Andrews, Jeff Gilbert, Michael Repper, Graham Roth, Jeff Wear
Our group will be evaluating the culture of anonymity on the Internet.
We will explore different aspects of the issue of anonymity online.
We examine the technical aspects of the situation: How anonymous does
current technology allow a person to be on the Internet, both in the
United States and abroad?
There are a number of tools out there that claim to give anonymity and
untraceability, but do they work?
How does the public interact with them?
How might one try to track down an anonymous user of the Internet?
We then delve into the more social aspects of Internet anonymity: How do
people change under the influence of anonymity, real or merely
perceived?
What happens when anonymity leaks into the real world—
intentionally, such as while protesting a real-world organization or
event, and accidentally, via hacking or other means?
This then leads to the political questions: What freedoms are allowed
under current laws in the United States and how do these freedoms differ
from one country to another?
Is there a responsibility to track the identities of people or prosecute
people who post anonymous messages that threaten other people online?
We aim to answer some, if not all, of these questions.
However, the fact of the matter is that there is still so much that
remains unknown, masked behind these anonymous identities.
Much research has gone into the personal side of the phenomenon, but
still much work is needed.
The ethical questions, too, loom large.
While we cannot provide a definitive answer to them, we hope to offer
astute insights into where the choice lies, and what the benefits and
detriments of each position are.
Privacy and social networks
Gary Lee, Pamela Martinez, Barr Moses, Nate
In our project, we touch on the following
five major points:
- Reputation amnesty and digital forgiveness: the idea that after a
certain period, a user’s online history is erased and they are
given the opportunity to rebuild their online reputation.
- Qualitative difference between privacy in social networks (as a
class) and privacy in the meat world. In social networks today we
don’t have control over the information we upload; we cannot
differentiate between the groups of people we want to share
information with and those we do not.
- Data use - explicit (e.g. targeted advertising) vs. non-explicit
(e.g. GPS location storage). There is, and always has been, a vast
amount of data out there about each of us. What has changed in the last
decade is that this data is now being explicitly used. In the past, for
example, Google has had access to users’ emails and could gather
information about them. Today, this kind of information is being put to
explicit use.
- Privacy settings - usually too cumbersome for the average person to
deal with. You need to go out of your way to protect your privacy; the
default is full disclosure. With each new Facebook feature, for
example, your activity will appear on the news feeds of everyone in your
network. It’s up to you to navigate the labyrinthine privacy
settings and figure out how to conceal these things from
others—and then, you’re still powerless against what the
company itself wants to do with your data. There’s a contract of
sorts: the networks provide a medium for sharing, and you (implicitly)
agree to relinquish your data.
- That the complexity of privacy control is irreducible for social
networks: for each media object that is to be shared, the people who
can access it must be explicitly set. Even with reasonable reductions
of 500+ contacts into a smaller number of “friend groups” of
varying levels of access, the complexity of privacy control has a
non-zero lower bound (i.e., the number of decisions grows with the
number of objects shared).
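The combinatorics behind that last point can be made concrete. The sketch below is purely illustrative (all numbers are hypothetical, not from any actual social network): grouping contacts shrinks the menu of possible audiences per object, but one decision per shared object is still required.

```python
def sharing_decisions(num_objects: int, num_contacts: int, num_groups: int):
    """Illustrative count of privacy decisions in a social network.

    Without groups, each shared object could in principle go to any
    subset of contacts: 2**num_contacts possible audiences. Grouping
    contacts shrinks the per-object menu to 2**num_groups audiences,
    but one audience decision per object is still required -- the
    non-zero lower bound described above.
    """
    per_object_without_groups = 2 ** num_contacts
    per_object_with_groups = 2 ** num_groups
    total_decisions = num_objects  # one audience choice per shared object
    return per_object_without_groups, per_object_with_groups, total_decisions

# Hypothetical user: 200 shared items, 500 contacts, 5 friend groups.
raw, grouped, decisions = sharing_decisions(num_objects=200,
                                            num_contacts=500,
                                            num_groups=5)
```

Even with the reduction from 2^500 possible audiences to 2^5 = 32, the user still makes 200 separate decisions; grouping lowers the cost of each decision, not the number of decisions.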
The culture of “free”
Elliot Conte, Henry Engelland-Gay, Evan McDonald, Rebecca Poulson, Elina
One of the most exciting aspects for users of the Internet is the
ability to access almost any media for free, be it songs or email or
news.
The constant availability of free services on the Internet has birthed a
unique culture of consumers and providers.
While this provides a wonderful experience for the user, costs are still
incurred that must be paid somewhere.
Our project aims to explore the roots of this “culture of
free” and its novel consequences in a number of specific domains.
Social networking sites allow users to connect to every corner of the
globe, but in return the sites gain access to millions of people’s
personal information and provide enormous advertising opportunities.
While social networking services have thrived recently under new
business models, older media have not adapted as successfully.
Traditional news companies have not yet settled on a workable strategy
to survive under this culture of free services.
Will existing alternatives like blogs or Wikileaks provide suitable and
reliable replacements for older news sources?
How will more everyday media, such as the music industry and
film/television, adapt to the ease with which users can access
their products for free via both legal and illegal sources?
Users are able to circumvent copyright restrictions more easily now than
ever before.
Clearly, some types of services have adapted well to the challenges of
doing business online, while others seem to be failing.
Within the computing industry itself, will traditional software
companies be able to maintain their economic success in the face of more
reliable open source availabilities?
Are some industries simply incompatible with the culture of free?
And to what extent is this culture simply an illusion created by hidden
costs?
WikiLeaks and whistleblowing
Alan Joyce, Ethan Lozano, Robert Schiemann, Adam Ting, Dominique Yahyavi
In 1971, the New York Times published a collection of top-secret
government documents known as the Pentagon Papers.
Leaked by U.S. military analyst Daniel Ellsberg, the Pentagon Papers
revealed a previously untold history of the Vietnam War, as seen from
inside the Department of Defense.
While these documents had been in Ellsberg’s possession since
1969, the public was unaware of them until they were acquired and
published by a major national newspaper.
Last year, 75,000 classified documents detailing the events of the War
in Afghanistan were published by the website WikiLeaks.
Comparable in scope to the Pentagon Papers, this leak marked a new era
in whistleblowing.
At each stage of the process, from obtaining the documents to
distributing and publicizing them, computers and the internet were
essential.
Technology enabled and expedited the wide-scale dissemination of state
secrets, providing valuable public insight into an ongoing conflict but
posing a potential threat to the security of U.S. military forces.
Computers and the internet have made it easier than ever to share
information on a massive scale.
In some cases, however, this information has been privileged and
confidential.
The rise of online whistleblowing, championed by WikiLeaks, has proven
effective and widely recognized.
As technology moves all information toward this model of unfettered
accessibility, the consequences for secret information could be
profound.
We hope to investigate how the concept of whistleblowing and information
leakage has translated to the digitally connected society that now
exists.
Session 2. Wednesday, June 1, 10:00-11:00, Gates 104
Bitcoin
Derek Czajka, Joe Kelley, Tola Lawal, Jasmine Mann
Internet commerce is almost completely conducted in soft electronic
currency.
In this environment transactions are reversible, and exchanges are
overseen by trusted third parties such as banks and credit card
companies.
Furthermore, transactions are rarely anonymous, and records of all
financial actions are dutifully stored.
While this way of conducting business aligns closely with the economic
philosophies of the United States federal government, an opposing
outlook has emerged in the form of Bitcoin.
Based on a 2008 paper by Satoshi Nakamoto, Bitcoin is a digital currency
that is fully anonymous, decentralized, and non-reversible.
In place of the more common idea of a trusted third party, the Bitcoin
economy utilizes strong cryptography to ensure the safety and security
of the currency.
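As a toy illustration of the kind of cryptography involved, the sketch below implements a hash-based proof of work, the mechanism Bitcoin uses to let the network agree on transaction history without a trusted third party. This is greatly simplified from Bitcoin's actual protocol (real mining hashes a structured block header, not a string, and adjusts difficulty dynamically); the transaction text and difficulty are arbitrary examples.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash of (block_data + nonce)
    begins with `difficulty` zero hex digits.

    Finding such a nonce requires brute-force search, but anyone can
    verify the result with a single hash -- the asymmetry that makes
    proof of work useful for a decentralized currency.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Hypothetical transaction record; difficulty 4 keeps the search fast.
data = "Alice pays Bob 1 BTC"
nonce = mine(data, difficulty=4)
digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
```

Because altering the transaction text would invalidate the hash, rewriting history requires redoing the work, which is what makes the currency tamper-resistant without a central authority.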
Proponents of Bitcoin praise the system because it is free from
regulation and value manipulation.
Nevertheless, Bitcoin raises serious economic, legal, and political
questions.
Opponents fear that Bitcoin would produce an unproductive economy that
would be overly susceptible to market fluctuations and deflation.
Many doubt whether Bitcoin economies should even be legal, because they
are anonymous and decentralized, and therefore impossible to tax or
regulate.
As a result, Bitcoin economies could be havens for black markets,
undermine the roles of government in capitalist economies, and decrease
tax revenues.
Our project explores these issues and attempts to determine what, if
any, place Bitcoin or other digital currencies have in our modern
economy.
Licensure and other regulatory strategies to improve software quality
Bryan Huh, Mike Lee, Trevor Metoxen, Max Shulaker, Tenzin Topden
Software plays a significant role in various computer systems,
especially in critical domains such as self-stabilization, monetary
control, and information exchange.
For these systems to function efficiently, producing quality software
and maintaining security is a must.
There are various models that software companies select during software
development and during the process of controlling the software system.
But a key problem is that software development involves more
sophisticated mechanisms than hardware manufacturing.
Moreover, the prevalence of bugs in a program’s source code and
design is an inevitable issue nowadays.
Unlike hardware problems, software problems have to be fixed and
controlled with top-quality tools, thus becoming a major problem in
their own right.
promote both software quality and credibility.
In addition, the project aspires to analyze software testing methods by
underlining both their guarantees and their risks.
Specifically, we will examine whether creating a required license to
practice software engineering, also known as
“professionalizing” the field, would help create a more
uniform and reliable environment.
We will discuss the direct positive ramifications of professionalizing
software engineering and the reliability it would afford us, and include
a discussion of the negative consequences inherently associated with
professionalization, ranging from issues of customization to cost.
After weighing the pros and cons, and by taking case studies of past
licensing attempts—both within the United States and
internationally—we will conclude whether licensing or another
approach would be most effective in solving the above problems.
Reliability of the cloud
Jonathan Candelario, Robert Hintz, Jabari Nyomba
The cloud has shown issues in three key areas.
While there are ways of addressing these issues, addressing one issue
negatively affects the others.
The reliability of cloud servers can be poor at times.
If a server goes down from a power outage or some other unforeseen
event, it can take hours before the client’s data can return.
This loss of time can affect production and cause a strain on a company
trying to access the files.
Also, as was the case with Amazon’s EC2, if something goes wrong
with a server, such as during an update, then clients can lose important
data.
Hackers are able to access data on the cloud because security is often
weak, allowing them to steal important information from clients.
Latency is a big problem with the amount of traffic cloud servers get.
Productivity can become a big problem because of this latency.
The cloud, however, also has some important benefits that make it a
great option for normal computer users.
Users can save data to the cloud on one computer, then move to another
computer and work on the same data.
The data isn’t tied to a single machine; it can be accessed from
any machine with Internet access.
- Cost effectiveness: clients are able to store data on cloud servers
extremely cheaply.
The Google Books project
MK Li, Maxine Lim, Charles Naut, Michael White
Google Books was launched in December 2004 and has developed into a
widely used resource around the world.
The project involves the scanning and storing of millions of printed
texts into Google’s digital database, further supporting
Google’s objective of organizing the world’s information and
extending it into the offline world.
Since the project’s launch, many top universities have joined
Google in its effort to digitize print resources, unlocking access to
knowledge that was previously unavailable to millions of people.
Although countless users have benefited from the convenience and depth
of knowledge delivered by Google Books, its legality and its effect on
the industry remain unclear.
Since the project’s inception, Google has faced legal challenges
concerning potential copyright infringement relating to its scanning and
distribution of copyrighted works.
Several settlements have been proposed, but the debate on Google Books
has not been resolved.
As recently as March 2011, the project has been subject to new court
rulings concerning its settlements, which leave its future in question.
In our research, we explore this tension between access to information
and protection of publishers’ rights, with an emphasis on the
potential impact on future innovation in the publishing industry.
We examine the cases, positions, and motivations of both sides of the
legal battle, the past and present legal and cultural context, and the
desirability of possible outcomes.
We also survey Google’s response to the publishing
industry’s concerns, including their modifications to the Google
Books program, and address the question of whether limited access to
copyrighted content can benefit the publishing industry.
Session 3. Wednesday, June 1, 11:00-12:00, Gates 104
Downloading consciousness
Jordan Inafuku, Katie Lampert, Brad Lawson, Shaun Stehly, Alex Vaccaro
Downloading consciousness is the idea that people may one day be able to
upload the contents of their mind to a digital medium.
Popular representations of downloading consciousness touch on aspects of
the subject such as immortality, speedup, multiple existence, etc.
This is exemplified in The Matrix,
in which characters upload
their minds to a virtual reality, and one character uploads his
consciousness into multiple copies of himself.
Downloading consciousness is also touched upon in the television show
Dollhouse.
In the world of Dollhouse, people are constantly having their
minds wiped and rewritten so that they can take on multiple
personalities.
Although popular fiction takes many liberties with the concept of
downloading consciousness, the state of current technology suggests that
we are still a long way from seeing it become a reality.
Current research in the field approaches downloading consciousness
through several avenues.
Projects such as IBM’s Blue Brain attempt to model the brain using
artificial neural networks.
Another avenue is that of brain imaging.
Current brain imaging uses scanning technologies to create detailed maps
of the brain.
Achieving downloaded consciousness will require a much greater level of
detail than that provided by today’s brain-mapping technology.
Lastly, the act of downloading consciousness is believed to require a
hypothetical technology known as brain-computer interfaces (BCIs).
A brain-computer interface would essentially be a direct neural
connection between a human brain and a computer.
Despite progress in research, many obstacles remain which prevent
downloading consciousness from becoming an everyday phenomenon.
Several of these problems stem from moral quandaries related to the
technology.
For instance, consider the social implications of having multiple copies
of one’s self walking around.
Other problems are more practical in nature.
For example, our technology is not at a point where we can manufacture
artificial networks with the same complexity as human brains.
Despite the fact that downloading consciousness remains a futuristic and
still theoretical technology, as research progresses, more and more
scientists are affirming the possibility that it will one day become a
reality.
Near field communication
Levirt Griffin, Omosola Odetunde, Nadja Rhodes
The near field communication (NFC) chip is an exciting technology that
has seen a lot of press lately.
The ability to safely make peer-to-peer mobile transactions, positively
identify documents and ID cards, replace physical keys for your car,
home, office, hotel room, etc., and pair devices in a Bluetooth-like
manner, all with your mobile phone, has taken the tech-blogging world by
storm.
Google has already promised NFC integration in its Android Gingerbread
release.
The key to the success or failure of this technology, however, lies in
the ethical rather than purely technological sphere—security concerns
such as eavesdropping, data modification, and physical phone loss pose a
major threat to its viability.
The development and resolution of these issues will provide an
interesting topic of discussion.
We begin by exploring the history of the development of near field
communication as well as the foundation of the NFC Forum in 2004, a
group started by Nokia, Sony, and Royal Philips Electronics devoted to
“developing specifications, ensuring interoperability among
devices and services, and educating the market about NFC
technology.”
More than 140 other organizations have since joined.
We then discuss the many uses of NFC and the new realms in which this
potentially powerful technology could enter.
This could be effectively analyzed by contrasting NFC with its
predecessors RFID and Bluetooth.
Finally, we investigate possible solutions to these ethical challenges.
Privacy and GPS
Elaine Chen, Stephanie Ogonor, Matthew Seal, Bridget Vuong
Within the past few decades, advances in global positioning technology
have made it increasingly difficult to keep information about a
person’s location private.
The usage of global positioning systems (GPS) in automobiles has become
commonplace, and nearly eighty percent of phones incorporate GPS
receivers.
Through a combination of GPS and cell phone triangulation, cellular
companies can closely approximate a person’s location.
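To make the location-approximation idea concrete, here is a minimal 2D trilateration sketch. It is purely illustrative: the tower coordinates and distances are invented, real systems work in three dimensions with noisy measurements, and carriers use more sophisticated estimators. Subtracting one circle equation from the other two linearizes the problem into a 2x2 system.

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Estimate a 2D position from three known points and measured
    distances to each.

    Each measurement defines a circle (x - xi)^2 + (y - yi)^2 = di^2.
    Subtracting the first circle's equation from the other two cancels
    the quadratic terms, leaving a 2x2 linear system in (x, y).
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("reference points are collinear; position ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three hypothetical cell towers; the phone is actually at (3, 4).
towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist(t, (3.0, 4.0)) for t in towers]
x, y = trilaterate(towers[0], dists[0], towers[1], dists[1],
                   towers[2], dists[2])
```

With exact distances the estimate recovers the true position; with real, noisy signal measurements the result is only an approximation, which is why the text says carriers can "closely approximate" a location.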
The recent discovery of stored location data in the Apple iPhone and
other smartphones has reignited privacy concerns associated with GPS.
The iPhone scandal has highlighted a need for increased protection of
location data.
However, it is important that, in procuring those protections,
legislators do not significantly reduce the efficacy of GPS.
In our website and presentation, we discuss the societal and individual
benefits of GPS as well as the strengths and limitations of current
privacy legislation.
We also explore cases where corporations and governments have infringed
upon the consumer’s right to privacy in the past and use these
cases to suggest additional legislation that could prevent further abuse
of GPS data in the future.
The impact of tablet-like devices on information availability
Shishi Chen, Francesco Georg, Alex Loewi
We are interested in the impact of tablet-like devices on information
Tablets have influenced the kinds of content that people choose to
access, changed people’s expectations of technological interfaces,
increased invasions of privacy, and altered the social interactions of
people in real life.
Our project attempts to understand these phenomena in greater detail and
explore their ethical implications.
In the first part of our project, we look for empirical studies of media
sales on various devices, such as phones, readers, and pads, to see what
kind of impact the devices have had on people’s choices of what to
consume.
The emphasis is primarily on books and magazine and newspaper articles,
especially about important genres such as politics and health.
Another part of our project studies the interaction paradigms that have
become commonplace with the extension of phone-like interfaces to
tablets.
Many expectations have been set by Apple’s proprietary iOS
interface, which results in monopolistic control of other kinds of
content.
For example, iPads change the way people watch movies, which gives Apple
influence over the movie industry.
Are tablets a cause or effect of privacy invasions?
This is a question we ponder in our project.
Finally, the availability of information on tablet-like devices has
affected the way that people interact with each other in real life.
Teachers taking attendance electronically can move around the classroom
if they have tablet devices, and nurses can access vast amounts of
information as they interact with patients.
On the other hand, even as portable computation devices make asocial
activities more social, they encourage people to engage in these asocial
activities rather than interacting face to face.
Session 4. Wednesday, June 1, 1:15-2:15, Gates 104
iMonopoly: The closed-app store market share game
Brianna Griffin, Will Guyman, Daniel Johnston, Amar Kak
According to classical economics, a monopoly exists when a specific
individual or an enterprise has sufficient control over a particular
product or service to determine significantly the terms on which other
individuals shall have access to it.
An argument can be made that Apple’s approach to running their
iTunes Store business for iOS mobile applications is a monopoly.
In this project, we examine Apple’s policy for approving new apps,
as well as the rules it has in place for allowing new apps to run on its
devices.
Is this practice monopolistic and anti-competitive?
While Apple’s complete control over every piece of the iOS
platform does feel ominously monopolistic, at this point intervention is
neither realistic nor advisable.
In the end, the natural course of the mobile marketplace will decide
whether Apple’s “closed” platform dominates and sets the
model for a new ecosystem of centrally controlled software.
If this occurs, government intervention may be necessary to protect
users from a big brother decision engine like Apple.
This project also confronts the question of whether monopolistic
practices as seen in the technology sector need to be looked at
differently than those in the past because of the rapid pace of change
in the industry and society.
Entrepreneurship at Stanford
Katherine Chen, Kamil Dada, Michael Duong, Joel Jean
Last fall, investor Peter Thiel launched his 20 Under 20 Fellowship,
which offers young entrepreneurial students $100K grants and expert
guidance to pursue their tech start-up dreams.
Current students—i.e., nearly all the participants—have to
drop out of school to work on their ideas full-time for a two-year
period.
At first glance, the Thiel Fellowship seems like a great opportunity,
but its under-20 requirement presents a clear ethical dilemma: Is it
right to encourage students to drop out of school?
Given that the vast majority of tech start-ups fail, is it truly in the
participants’ best interest to forgo a college education and work
on such a high-risk project?
Taking the Thiel Fellowship as a case study, we examine the unique
entrepreneurial environment at Stanford—it seems like every
Stanford student wants to start their own company—and consider
its causes and consequences.
We interview current students, alumni entrepreneurs, professors, and
representatives from many of the entrepreneurship groups and programs on
campus (e.g., STVP, Mayfield Fellows, SSE Labs, BASES, ASES) to answer
some of the big-picture questions surrounding Stanford’s
entrepreneurship culture: Does Stanford focus too heavily on
entrepreneurship at the expense of a classical liberal arts education?
Does the university train real innovators or simply business leaders?
After all, it’s been said that the various Institutes of
Technology produce stronger engineers than Stanford, but Stanford
engineers end up managing the others.
With so many other universities and regional governments around the
world trying to emulate the dual prosperity of the Stanford-Silicon
Valley connection, we should make sure that we’re truly worthy of
emulation.
Modern computing and personal genotyping: The ethics of genomics
Maverick Chea, Kristian Gampong, Paul Lee, Brian O‘Connor, Keebuhm Park
Genomics, the study of using one’s genetic makeup to understand
one’s phenotype, is one of the fastest growing fields of modern
science.
The human genome is composed of 23 strings of hundreds of millions of
nucleotides.
this high-throughput discipline not only has decreased the sequencing
time of a full human genome from a year in 2001 to two days, but it also
has decreased the costs from millions to $15,000.
However, the increasing computing power has brought genome sequencing to
the consumer at the cost of multiple ethical and moral issues.
An important issue regards the broad range of information stored in
one’s genome.
23andme, a personal genomics company that integrates a social network
with one’s genotype, has been at the helm of a debate regarding
the safety of one’s data.
If one’s disposition to certain genetic defects is available in
the Cloud, will insurance companies and potential employers leverage the
data?
Genetic discrimination could save billions for insurance companies, but
leave millions uninsured.
The act of “playing God” has been in constant debate as
well.
Should parents be allowed to genotype their fetus’s genome, even
if the genome indicated an 80% likelihood of terminal Huntington’s
disease?
When a Los Angeles fertility clinic offered to allow parents to
“design” their babies in early 2009 via genomic embryonic
selection, the public was divided.
With the increasing strength and convenience of sequencing, technologies
will allow humans to become more powerful creators, but that control is
interwoven with ethical issues.
This project presents issues including who owns genetic data, the ethics
of using computing technology for genetic discrimination, and the ethics
of designing human life.
Student privacy at Stanford
Shubuka Mainsah, Ethan Nash, Michael Ortiz, Devon White
In 2002, the Stanford Student Computer and Network Privacy Project
(SSCNPP) conducted a study examining student privacy issues at Stanford.
The study examined how federal laws such as the Electronic
Communications Privacy Act (ECPA) and Family Educational Rights and
Privacy Act (FERPA), along with institutional policies such as
Stanford’s Principles of Privacy and Stanford’s Computer and
Network Usage Policy together impacted student privacy on campus.
Their recommendation was that these documents needed to be updated to
protect student privacy and that the information contained within these
documents should be better propagated to educate students about their
rights.
Despite these recommendations, made nearly ten years ago,
Stanford’s policy has changed very little to reflect them.
Perhaps most disconcerting, the group found that it was possible using
university-sanctioned programs such as Stanford Who to access such
private student information as home and school street address, class
schedule, e-mail activity logs, and physical location if the user was
accessing a cluster computer.
They raised concerns about the ambiguity of terms in both FERPA and
ECPA that could be interpreted in ways harmful to student privacy.
As for university policy, there were also concerns raised regarding
system administrator power and the university’s Principles of
Privacy, which has not been updated since 1984.
Our research will use the SSCNPP’s research as a foundation,
examining how Stanford’s policies have changed over the past
decade to keep up with newer technologies.
We will attempt to discover other security flaws and explore how the
proliferation of newer technologies such as torrent downloading, social
networking, and mailing lists affects the balance between
students’ rights to privacy and the university’s legal
obligations.
Our goal is to synthesize a more current recommendation to update
Stanford’s privacy policies.
Session 5. Wednesday, June 1, 2:15-3:15, Gates 104
Press freedom for bloggers
Eric Conner, Zach Galant, Jeremy Keeshin
Press freedom guarantees the liberty of expression for journalists
through written and electronic media by preventing governments and other
institutions from interfering.
It promotes transparency by freeing the journalist from the bias of any
single institution.
against several unknown individuals who leaked secret product
information that got published on several blogs.
Apple wanted the bloggers to identify the sources that gave them the
information.
At first, the courts found that the information was stolen and a
subpoena
could be granted, but later the courts ruled that the identity of the
confidential sources should be protected.
This suit raises an important question about the extent to which press
freedom should be granted to bloggers.
The issue of press freedom has always been crucial for journalists, but
the legal system has been challenged by new issues raised by the
Internet.
In recent cases such as Wikileaks, and other incidents like the release
of the Pentagon Papers, the freedom of journalists to publish secret
information has been a core issue.
Several ethical considerations arise in these situations.
If one obtains secret information, what should he or she do with it?
When is it okay to release the information?
When should one keep it secret?
Does someone ever have a moral obligation to release secret information?
This report will explore these questions in the context of several
press-freedom cases, including Apple v. Does, Wikileaks, and the
Pentagon Papers.
The technological singularity
Leland Farmer, Joe Kenahan, Tom Medina, Jonathan Tilley, Dawson Zhou
We know that computers are getting faster and that the rate of
improvement is accelerating.
A group of thinkers and futurists believe that it is only a matter of
time before computers and artificial intelligence outstrip the human
mind.
At that point superintelligent computers would be able to take over
their own design and improvement, leading to the development of even
more intelligent entities on even shorter time scales.
That point in time can be thought of as a singularity because, similar
to the space-time singularity associated with black holes, our ability
to predict what would happen after that point in time breaks down
because we lack knowledge of how a superintelligent system might behave.
Is it possible to get to singularity or will a “technology
paradox” keep us from ever progressing that far?
If singularity is in the future what ethical dilemmas does it create and
how should we plan for them?
In this project we will give a brief background on the exponential
growth of technology along with a comparison between humans and
computers.
We will discuss the different possible paths to singularity as well as
the criticisms of each hypothesis.
Theories abound about what the singularity will mean for humanity,
ranging from extinction to immortality.
We will also examine how superintelligence might affect humans,
considering both doomsday and golden age possibilities.
Journalism in the Digital Age
Danny Chrichton, Ben Christel, Alex Valderrama, Aaditya Shidham, Jeremy Karmel
Journalism is cited as being central to democracy.
Indeed, statesman Edmund Burke called this institution the “fourth
estate” of a democratic government in the 18th century.
In the modern era, journalism continues to play a vital role in society.
Our research thus begins with an exposition of the roles journalism
plays and the obligation it has to society.
This exploration motivates a discussion of how journalism has changed in
the digital age.
This begins with an analysis of what started it all: the inception of
the so-called “digital age,” including the development of
the Internet and, later, Web 2.0 technologies.
We explore these developments and their effects on social expectations.
How did these effects transform the expectations of journalism?
Then, we observe how economics of journalism have changed as a result of
this new era.
In other words, how have the business models that have promoted profits
and sustained membership been forced to change in the digital age?
Are there innovative business models that can save print media in this
tide of restructuring?
Finally, we explore how the changes to the profession and the practice
of journalism have resulted from a transition to the digital era.
We rely on interviews of practicing journalists to chart how the
routines of such individuals are changing as a result of the digital age.
We also observe the characteristics of online media and report on the
broad structures of journalism that are changing.
This framework motivates ample connections between observed changes in
technology, economics, and journalism.
Furthermore, the process permits us to ask whether new forms of
journalism are fulfilling the obligations that journalism has to society.
Such analysis allows a thorough interrogation of the nature of
journalism in the digital age.
Smart phones and economic development
Sam Garrett, Ben Goldsmith, Vivian Nguyen, Hee Su Roh, Eunmo Yang
Since their emergence, cellular phones have rapidly become accessible to
all economic classes.
Once exclusively a toy for the rich, cell phones are now utilized by
over half of the world’s population.
Currently, a similar trend is arising among smart phones—a class
of mobile devices equipped with Internet access and GPS location capabilities.
Previously only available in the developed world, smart phones have
begun to spread to emerging economies.
By 2015, it is estimated that 31% of all African mobile phone
subscriptions will include Internet access.
So why does this matter?
Compared with their simpler predecessors, smart phones are more powerful
tools for economic development.
Enabling access to information, banking, and insurance services, smart
phones dramatically increase market efficiency in developing regions.
In addition, smart phones provide communication to lower income markets,
allowing global corporations to greatly expand their consumer bases and
move their products into previously untapped territories.
More importantly, smart phones provide access to medical information,
and in the future, could potentially allow for the cheap, quick
diagnoses of the world’s most deadly diseases.
However, the greatest promise of smart phones is that they provide all
of these benefits at a fraction of the cost of a computer.
Furthermore, they do not require a sophisticated and expensive IT
infrastructure that demands frequent maintenance.
Smart phones have the ability to revolutionize the economic market,
especially in the growing economies of the developing world.
With their convenience, affordability, and promises of communication and
information, the possibilities for smart phones in all economies are vast.
Session 6. Wednesday, June 1, 3:15-4:00, Gates 104
The psychology of trust on the Internet
Kat Busch, Caitlin Colgrove, Frank Li, Nora Willett, Remington Wong
Given the vast amount of unpoliced information available on the Internet
today, how does one decide whom to trust?
What happens when companies, governments, or individuals betray that trust?
In this project, we explore methods to establish trust online and how
websites use—and sometimes abuse—that trust.
We examine trust on the Internet from multiple perspectives: the effect
of user interfaces on websites’ reputation, the technical aspect
of encryption and authorization, and the use of reputation systems.
Trust on the Internet often relies on subtle interface cues.
A 2004 study of patients’ evaluations of medical websites found
that certain UI elements had a dramatic effect on the perceived
credibility of the sites.
Over 94% of the reasons cited for mistrust concerned design features of
the sites rather than their content.
As a result, trustworthy and untrustworthy entities alike can use
professional-looking website design to promote user trust.
Users also rely on technical cues, such as the lock icon in the browser
address bar, to indicate security.
Security does not imply trustworthiness, however, as SSL certificates
are extremely easy to obtain.
Malicious sites often exploit this misplaced trust to lure unsuspecting
users into providing sensitive information.
The ease with which SSL certificates can be abused undermines their
usefulness as indicators of trust on the Internet.
Many popular websites use systems that allow the users themselves to
control the reputations of other users.
On eBay, buyers rate their experiences with sellers, allowing other
buyers to decide whether a seller will legitimately follow through on a sale.
On StackOverflow, a question-and-answer community for software engineers,
users vote to modify the reputations of other users.
Wikipedia has anonymous contributors, forcing viewers to rely on
citations and style to evaluate articles’ reliability.
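The vote-driven systems described above can be pictured as a running tally per user. The sketch below is a minimal, hypothetical illustration in that spirit (eBay feedback scores, StackOverflow votes); the class and user names are invented, and no real site uses exactly this scheme.

```python
# A minimal, hypothetical reputation tally -- illustrative only.
from collections import defaultdict

class Reputation:
    """Tracks a running score per user from community up/down votes."""

    def __init__(self):
        self.scores = defaultdict(int)  # unknown users start at 0

    def vote(self, user, delta):
        """Record an up-vote (+1) or down-vote (-1) on a user."""
        self.scores[user] += delta

    def score(self, user):
        return self.scores[user]

rep = Reputation()
rep.vote("seller42", +1)
rep.vote("seller42", +1)
rep.vote("seller42", -1)
print(rep.score("seller42"))  # 1
```

Real systems layer on defenses this sketch omits, such as weighting votes by the voter's own reputation to resist manipulation.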
We aim to teach users how to evaluate authority on the Internet and
decide which websites to trust.
Computers making decisions for humans
Kseniya Charova, Lucas Garron, Cameron Schaeffer
Recent advances in technology have brought many benefits and changes to society.
As the software behind this technology becomes increasingly complex,
humans are able to rely on the automation of tasks that would otherwise
require much time and effort.
However, this complexity also creates problems for which our solutions
are steadily more dependent on the power of computers.
In addition, technology has brought benefits that raise new issues
requiring ever greater technical creativity to solve.
For example, technological leaps in medicine have led to an increase in
average life expectancy, but left the ever growing elderly population
without adequate resources to support them.
This is a particularly big problem in Japan, where nursing homes are
increasingly turning to robots to relieve the labor force.
Similarly, advances in business, military, and research have led to
systems that are either too large or dangerous for humans to operate;
algorithmically advanced robots may present a possible solution.
Today, robots already have quite a presence, but the technology is still
young, and the dangers of giving machines too much responsibility are not
yet fully understood.
However, as machines become more intelligent, they will be placed in
situations where they will make decisions for humans.
It is important to continue our development with a strong understanding
of the risks.
We cannot guarantee absolute safety, but we can guard against the risks.
Our research focuses on how to draw the line between power and safety
when assigning responsibility to robots.
We will also cover the extensive changes that will have to be
implemented by engineers, manufacturers, and users of this technology in
order to ensure that intelligent machines will not pose a serious risk
to human users.
Virtual worlds
Rex Kirshner, Sameep Mehrotra, Lucas Prokopiak, Jack Reidy, Taylor Savage
From early text-only games to deeply immersive manufactured universes,
virtual worlds have characterized the digital space, and have drawn
millions of users across the globe to participate and interact.
Inhabiting virtual avatars or crafted profiles, users continue to
explore and push the limits of digital social interaction and gaming,
using the anonymity and idealization available online to meet friends to
interact with offline or to establish completely new personas and identities.
We define virtual worlds as digitally created spaces inhabited by
anthropomorphic avatars. Such worlds have transitioned from purely
gaming and entertainment purposes to encompass a whole range of
applications and use cases, appealing to a much broader audience.
Virtual worlds, so often the focus of science fiction, are inexorably
connected to the advent of the digital revolution, and have similarly
transitioned as technology has grown and evolved.
Much like the computer itself, virtual worlds were once accessible only
to the technical elite, used primarily for the powerful ability to
create imagined, often phantasmal alternate personas. More recently they
have gained much broader appeal and focus far more on socializing and
creating relationships that will be continued offline.
Though often taking on quirky, stylized appearances, virtual worlds have
recently tended to acquire more characteristics of the real world and to
lessen the anonymity barrier, incorporating Facebook profiles or
associating real-life pictures with avatars.
Virtual worlds have become tailored for certain social interactions,
such as specifically for dating, meeting other students within a
university, for gaming while piggy-backing on existing social media
platforms, or even for tracking and working on day-to-day tasks.
Since the text console games of old, virtual worlds have tended to move
away from pure entertainment to immersive social experiences with
concrete ties to real life.
Session 7. Thursday, June 2, 9:00-10:00, Gates 104
Code and the regulation of the Internet
Christophe Chong, Calvin Fernandez, Eli Hart, Jonathan Kuo, Dilli Paudel
In his book Code and Other Laws of Cyberspace, Lawrence Lessig,
until recently Professor of Law at Stanford University, argues that the
future shape of the Internet depends on the actions we take to define it.
He writes that, in order to protect freedom on the Net, we need to take
actions to safeguard the values we cherish.
The action he upholds is educated government intervention rather than
libertarian market faith.
Lessig believes that we should not sit idle and hope that the Internet
will fix itself.
He criticizes the belief that the Internet is a sovereign entity without
ties to the traditional legal system and warns that this fallacy could
lead to the demise of core values such as free speech, privacy and
anonymity, to name a few.
To maintain these values and to save the Internet from over-regulation,
Lessig proposes—counterintuitively—increased regulation.
To begin, Lessig describes four methods of regulation that apply to both
cyberspace and the natural world:
- Regulation through network architecture (code);
- Societal regulations (what is considered appropriate social behavior);
- The market costs associated with maintaining parts of the Internet;
- The law.
Combined, these work together to regulate our Internet behavior.
Of these, however, Lessig believes the most effective form of regulation
online is through architecture: “[online,] code is law.”
Because of the legal power code holds, Lessig cautions against the
commercialization of code, which he believes will lead to the
privatization of law and increased governmental control.
In response, Lessig proposes that we subject private entities to
increased constitutional controls.
He also argues in favor of open-source code which, like the laws
governing the natural world, can be examined by everyone.
Free expression vs. maintaining social cohesion
Conrad Chan, Anthony Dao, Justin Hou, Tony Jin, Calvin Tuong
In the United States, most people take the right to freedom of speech
extremely seriously and are almost absolute in their determination to
protect that freedom, even as it applies to the Internet.
Thus, American governmental control of cyberspace has been relatively minimal.
The governing bodies of other countries, however, such as those in the
European Union and Asia, have taken the initiative to control
information flow on the Internet.
One of the most well-known attempts at information control is the
much-criticized “Great Firewall of China.”
More recently, European countries have been using similar systems, most
notably site blacklists, to prevent the spread of information that their
respective governments deem “harmful” or otherwise objectionable.
Such actions have roused great debate regarding free speech in many countries.
On one hand, governments seek to protect their citizens and shield them
from hate speech, slander, and the likes of child pornography.
On the other hand, netizens fight for their right to free speech and
unregulated access to information on the web.
Laws such as the Telecoms Reform Package have been passed to allow
censorship by Internet Service Providers (ISPs), limiting freedom of
expression.
The content that ISPs decide to block, however, is arguably harmful to
society, and it is for this reason that governments passed such laws
in the first place.
In this report, we will explore the methods that different governing
bodies use to censor Internet content and examine the consequences of
such amendments to free speech, including effects on the vast amounts of
information dispersed online and reactions from the hundreds of millions
of Internet users.
Freedom of information: China
Suzanne Aldrich, Marty Hu, Albert Lai, Khanh Le, Chanya Punyakumpol
Our group primarily focuses on Internet freedom in China.
China attracted our attention because it is one of the
world’s fastest growing economies, and issues that affect the
freedoms of its population will ultimately steer the development of the
Internet worldwide.
Our group will provide a primer on how the Great Firewall works, as well
as discuss its implications for Chinese citizens and the international
community.
Some events that we’ll examine and analyze include Google’s
decision to withdraw from China and the requirement of Chinese citizens
to install monitoring devices on all computers.
We’ll also look at how the Chinese citizenry has responded to this censorship.
For example, we will look at the development and use of Tor’s
anonymity network.
We will also pay special attention to the rise of the “human-flesh
search engine” and websites that create them.
Both of these services bypass China’s censorship software in
different ways and allow citizens to organize collectively to free
themselves from government monitoring and government-sponsored mass
surveillance.
As part of our analysis we will provide our own interpretation on the
moral issues raised by the Great Firewall as well as the ways by which
it is circumvented.
We will look at China’s response and the possible ways in which
the situation might evolve.
Integral as China is to today’s global economy, the results will
carry great implications not only for the people of China, but for the
rest of the world as well.
Interaction between technology and Chinese communism
Victoria Kwong, Leon Lin, Chloe Yeung, Lance Zhao
One of the defining characteristics of Chinese culture is the influence
of the Chinese government.
Starting from early Confucian beliefs regarding sharing and openness,
and strengthened by the recent ascent of Communism, the Chinese
government has played a significant role in shaping technological
developments in China.
There is little sense of privacy on the Internet, as the Chinese
government maintains the right to monitor citizens’ use of the Internet;
there is also very little freedom of speech online, as speech and the
expression of ideas are heavily regulated by the Chinese government.
Furthermore, recent censorship of many of the most popular websites in
the West, such as Facebook and Google, has brought to light the
heavy censorship of the Internet by the Chinese government.
On the other hand, the idea of sharing promoted by Communism has also
made the government less restrictive on some issues, primarily that of
intellectual property.
With some statistics suggesting that as much as 90% of software in China
is pirated, the Chinese software industry has had a difficult time
growing, despite the clear presence of talent in China.
Yet, technology also seems to be shaping Chinese culture and beliefs.
A recent survey showed that 80% of Chinese citizens strongly disagree
with Chinese censorship of the Internet, and despite heavy censorship, a
younger generation of Chinese citizens is being exposed to Western ideas.
Thus, while emerging technologies in China have illustrated the
continued presence of strong Communist beliefs, they also appear to be
facilitating a shift towards more Western beliefs.
Session 8. Thursday, June 2, 10:00-11:00, Gates 104
Bitcoin and decentralized currency
Giancarlo Daniele, Tunmise Olayinka, Azmaan Onies
There is a change in our culture from centralized systems to
decentralized ones.
BitTorrent, cloud computing, and Bitcoin are some examples that
represent this change.
Even though Bitcoin is fairly new (2009), many have adopted it very quickly.
There are over 6.2 million bitcoins in circulation with a valuation of
over $7 per bitcoin.
Using open source software, anyone on the internet can create Bitcoins,
the first decentralized digital currency of its kind.
Transferred directly from person to person and free from financial or
legal regulation, Bitcoins represent a modern, networked approach to currency.
While it is still too early to estimate the success of internet-based
currency systems, various online retailers currently recognize and
accept the Bitcoin as a valid currency.
If this trend continues, legal and ethical questions would undoubtedly
arise as a result of the widespread use of a currency that is not
regulated in a traditional sense.
Bitcoins have many of the same characteristics as traditional, paper
currencies like the American dollar.
For example, like the silver that once backed the American dollar,
Bitcoins are scarce in their own way.
By design of the underlying algorithm, the number of Bitcoins that can
ever exist is capped at a fixed total of roughly 21 million, with new
Bitcoins becoming progressively more difficult to generate.
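The cap can be illustrated with a short sketch of Bitcoin's published issuance schedule: the per-block reward starts at 50 BTC and halves every 210,000 blocks, so summing the schedule shows why the total can never reach 21 million coins.

```python
# Sketch of Bitcoin's issuance schedule: a 50 BTC block reward that
# halves every 210,000 blocks converges to just under 21 million coins.
SATOSHI = 10**8             # 1 BTC = 100,000,000 satoshis
HALVING_INTERVAL = 210_000  # blocks between reward halvings

def total_supply_btc():
    reward = 50 * SATOSHI   # initial block reward, in satoshis
    total = 0
    while reward > 0:
        total += HALVING_INTERVAL * reward
        reward //= 2        # integer halving, as in the protocol
    return total / SATOSHI

print(total_supply_btc())   # just under 21,000,000
```

Because the reward is halved with integer division, the sum works out to slightly less than the 21 million that the idealized geometric series would give.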
Similarities aside, there are some stark differences too.
The consequence of everyone using such a currency can be examined by
comparing it to the Euro.
However, even with the Euro, there is a central governing body making
decisions about its value and maximizing benefits to its member states.
With Bitcoins, there is no such authority.
This free-floating nature could lead to interesting possibilities.
For example, the police would not be able to “seize”
bitcoins from an illegal trader.
Technological trends in Latin America and their social and economic impact
Justin Heermann, Alvin Heng, Chamal Samaranayake, Lilly Sath
Latin American countries invest less in science and technology than
countries of greater economic development.
As a result, there is a huge gap between the total number of
technological advancements made by Latin American countries and other
parts of the world.
The amount invested in research and development has, on average, been
less than 1% of GDP in these Latin American countries for the past
decade, causing them to fall behind in the tech sector.
In recent years, however, countries such as Brazil, Chile, and Venezuela
have taken initiative in expanding their technology-based industries in
an effort to decrease this gap.
The primary mechanisms for this change have been the development of
technological infrastructure, the expansion of science- and
mathematics-based educational programs, and the encouragement of a
healthy business climate.
The campaigns by which some of these countries are developing their
high-tech workforces have been effective, as demonstrated by Brazil,
which is recognized as one of the fastest growing economies in the world.
The rapid expansion of these economies is changing the social landscape
in their respective countries.
In this project, we will analyze the strategies of several Latin
American countries and examine the repercussions of such changes through
both a social and economic perspective.
Solving the technology brain-drain epidemic in the Asia-Pacific region
Joseph Harmon, Jordan Hall, Spencer King, Kristen Leach, Timothy Tam
The “brain-drain” epidemic, which refers to the loss of
valuable, skilled human resources to other countries, has become
an increasingly substantial issue in the Asia-Pacific region.
Given the opportunity, a significant number of people from these regions
choose to work in more developed countries such as the United States,
United Kingdom and Australia, mostly swayed by the higher salary and
better quality of life.
This free export of valuable human resources is detrimental because
developing countries invest a large amount of resources in developing a
skilled workforce through universities, scholarships and training
programs, but lose them to already-developed countries.
Our research will focus on examining the efforts taken by various
governments in the Asia-Pacific region to solve the brain-drain
predicament, evaluating the effectiveness of each step carried out, and
determining ways to improve them.
Micropayments and the Net
James Fosco, Stacy Kaufman, Dave Luciano, Abhinav Ramani, Long Zou
The emergence of micropayments in the e-commerce market has long been
anticipated.
Defined as any online transaction up to $10.00, micropayments allow for
a la carte service on the web, replacing alternative subscription models
that demand larger upfront payments.
Today, users can pay an upfront cost for certain products, such as an
access pass to paid content.
The system of micropayments seeks to simplify such schemes of e-payment.
However, micropayments have yet to catch on in industry because of
various implementation issues.
Micropayment schemes need to make their systems fully reliable, secure,
and easy to use.
Not only is the billing method a technical challenge, but so is the user
experience.
Downloading software, authenticating bank accounts, and constantly
monitoring charges make the implementation of micropayment schemes
difficult at best.
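One commonly discussed way around the billing challenge is aggregation: accumulate many tiny charges locally and settle with the payment processor only once a threshold is reached, amortizing the per-transaction fee. The sketch below is a hypothetical illustration of that idea; the class, threshold, and method names are assumptions, not any real provider's API.

```python
# Hypothetical micropayment aggregation: batch tiny charges and settle
# in one (expensive) processor call once the tab reaches a threshold.
class MicropaymentTab:
    def __init__(self, settle_threshold_cents=500):
        self.balance_cents = 0            # unsettled micro-charges
        self.settled_cents = 0            # total sent to the processor
        self.settle_threshold_cents = settle_threshold_cents

    def charge(self, cents):
        """Record a micro-charge; settle once the tab is large enough."""
        self.balance_cents += cents
        if self.balance_cents >= self.settle_threshold_cents:
            self._settle()

    def _settle(self):
        # In a real system this would be the single per-batch call to
        # the payment processor, paying one fee for many purchases.
        self.settled_cents += self.balance_cents
        self.balance_cents = 0

tab = MicropaymentTab()
for _ in range(12):
    tab.charge(50)  # twelve 50-cent articles
print(tab.settled_cents, tab.balance_cents)  # 500 100
```

Twelve 50-cent purchases trigger one settlement of $5.00, with $1.00 still pending, instead of twelve separate billed transactions.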
Aside from the implementation challenges, more interesting points arise
when assessing the economic and social impact of this concept on the Net.
Most immediately, micropayments facilitate payment to intellectual
property owners who do not get paid when files are shared illegally and
help consumers itemize their purchases.
But what happens when services that are currently free, like digital
newspapers, start charging for their services?
Does this inadvertently cause a change in usage for those who could once
access online material but can no longer do so due to the additional cost?
Micropayments also pose greater concern for user anonymity.
Security is a major priority, but when companies can track personal
information with every transaction, consumers may grow apprehensive
about making online purchases.
Such social and economic dilemmas are what make implementing
micropayment systems complicated.
By assessing the technological, social, and economic challenges in
current micropayment schemes, we hope to present a convincing
justification as to why micropayments are not as beneficial as initially
thought.
Session 9. Thursday, June 2, 11:00-12:00, 380-381T
Autonomous agents and decentralized decision-making
Chanh Nguyen, Taesung Park, Abigail Soong, Kyle Tsai, Wyles Vance
During the past decade, efforts to apply artificial intelligence to
military defense applications have made possible the idea of autonomous
drones on the battlefield.
The Autonomous Learning Agent for Decentralized Data and Information
Networks, also known as ALADDIN (2005-2010), was a joint project of BAE
Systems, the universities of Oxford, Bristol, and Southampton, and
Imperial College London. It aimed to create a network of information
gatherers, allowing a complex algorithm to decide the best course of
action in a variety of situations, ranging from disaster relief to
military operations.
Each machine employed in these unpredictable situations has a command
module that can be considered an agent in the overall response.
A ship’s computers, the control system of an unmanned aerial
vehicle, or an analysis program monitoring online activity are all
examples of these agents.
ALADDIN provides a toolbox of techniques, both for the design of single
agents and for their implementation.
Its algorithms allow the agents to exchange information with one another
and build a bigger picture of the situation than they would have
on their own.
The agents then effectively negotiate with each other to decide the best
course of action for the whole team.
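The information-sharing step can be illustrated with a toy gossip scheme: each agent repeatedly averages its local estimate with its neighbours', so the network converges on a shared value with no central authority. This is an illustrative sketch only, not ALADDIN's actual algorithms, and the agent names are invented.

```python
# Toy decentralized information fusion: agents repeatedly average their
# estimates with their neighbours' until the network reaches consensus.
def consensus_round(estimates, neighbours):
    """One round of gossip: each agent averages with its neighbours."""
    updated = {}
    for agent, value in estimates.items():
        peers = [estimates[p] for p in neighbours[agent]]
        updated[agent] = (value + sum(peers)) / (1 + len(peers))
    return updated

# Hypothetical three-agent network: a UAV, a ship, and a monitor.
estimates = {"uav": 10.0, "ship": 4.0, "monitor": 7.0}
neighbours = {"uav": ["ship"], "ship": ["uav", "monitor"], "monitor": ["ship"]}
for _ in range(50):
    estimates = consensus_round(estimates, neighbours)
print(round(estimates["uav"], 3))  # all three agents converge to 6.571
```

Note that the shared value is a weighted rather than simple average of the initial estimates, since better-connected agents (here, the ship) influence the outcome more; real systems would weight contributions deliberately.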
This approach can be particularly useful during a cyber attack or
natural disaster, when communication from a central authority is unavailable.
ALADDIN brings up several important ethical questions.
We have typically only used machine learning to facilitate
decision-making in non-threatening situations, but ALADDIN proposes that
software can be trusted to do so in high-consequence situations such as
combat.
Since software and algorithms are not perfect, there is always risk in
trusting ALADDIN to make life-or-death decisions.
The decentralized nature of ALADDIN is meant to avoid the vulnerability
of a central backbone, but it also makes problems in the system more
difficult to isolate and solve.
Moreover, since a distributed system contains more points of possible
attack, the integrity of the whole system cannot be guaranteed, which
could endanger people relying on the system.
Technology and its impact on the environment
David Johnson, Jackie Liao, Cameron Mansson-Perrone, Jared Poelman, Claudia Roberts, Jacob Speidel
Not too long ago, Mark Bohr, Intel Senior Fellow, and Kaizad Mistry,
Program Manager of Intel’s revolutionary 22nm Transistor
Technology, unveiled the design behind Intel’s new Tri-Gate
transistor, which is claimed to provide “an unprecedented
combination of improved performance and energy efficiency.”
This battle to lengthen battery life
in portable devices, reduce electrical consumption, and cut the
amount of wasted energy is one that is constantly being fought by tech
companies.
However, there are unintended casualties in this never ending battle to
improve energy efficiency.
Intel says it can increase the performance of its 22nm processor while
reducing energy leakage by adding a new layer to the silicon.
Instead of silicon dioxide, Intel is making use of a material called a
“high-k metal gate,” which provides better insulation.
The high-k process, codenamed Penryn, is expected to use 30 percent
less power, operate 20 percent faster, and leak five times less
electricity than a 65nm processor.
However, the production of silicon wafers is not without controversy.
The process generates a range of byproducts, some of which are hazardous.
In Silicon Valley alone, pollution as a result of these hazardous
chemical byproducts has become a major environmental concern.
In 2004 researchers found that manufacturing one desktop computer and
17-inch CRT monitor uses at least 240 kg of fossil fuels, 22 kg of
chemicals and 1,500 kg of water.
Furthermore, in CRT monitors there can be anywhere from 4-8 pounds of
lead protecting the user from radiation.
At the end of the life cycle, this lead poses a serious environmental
risk; it has been estimated that billions of pounds of lead from older
computers will be thrown away in the next few years.
At present, there is no Federal mandate to recycle e-waste or to
properly dispose of hazardous substances in electrical and electronic
equipment.
There have been numerous attempts to develop a Federal law.
However, to date, there is no consensus on a Federal approach.
The limitations of US infrastructure
Ntokozo Bhembe, Charlie Fang, Julian Malinski, James Painter, Spencer Stamats
The United States accounts for much of the innovation in web technology
today, yet the country ranks 16th in the world in percent of residents
with broadband access, trailing such nations as Sweden and Singapore.
This report will examine how such a situation arose and attempt to
explain why a nation that features so prominently in Internet innovation
is lagging in Internet access.
Furthermore, this report will investigate the potential harm that this
is doing to the United States, taking a closer look at how a strong
Internet infrastructure can aid everything from the economy, to
education, to law enforcement.
Finally, we will consider several potential solutions to the problem of
the United States’ lagging Internet proliferation, including some
measures already in effect, such as the Broadband Data Improvement Act
of 2008 and the National Broadband Plan, as well as other proposals on
the table.
Ultimately, our goal is to raise awareness about the U.S.’s limited
broadband reach, and to provide hope that improvements will allow it to
maintain its leadership position in Internet-driven innovation.
Freedom of digital information in the Middle East
Riddhi Mittal, Sami Shad, Suril Shah, Jarred Simmer, Alex Trytko
In light of the recent revolts that have taken place in the Middle East,
and the reactions of the Tunisian and Egyptian governments, strong
questions have been raised about a government’s right to limit
access to digital forms of communication and information.
In these countries, internet access and phone connectivity were largely
cut in an attempt to quell the revolts.
But what is the significance of these actions in light of ideas such as
freedom of speech?
What impact do these restrictions have on the residents who have come to
rely on such means of communication?
How could the governments use such forms of communication to forward
their own causes, as some say is seen in the recent revolts on
Israel’s border with Lebanon, Syria and Palestine?
We would like to examine these issues by looking at the actions taken by
the governments, the responses of those affected, the subsequent
responses of those same governments (after the restrictions backfired),
and the reactions of other governments such as our own.
We would also like to use policies from the previous decade as a lens
through which to view these more recent restrictions.
The Middle East has had its share of internet censorship and legal
action due to these issues.
A legal suit, for instance, was brought against the Tunisian Internet
Agency for some of its censorship policies; the court dismissed it
without explanation.
Situations such as this show that even before these revolts, there
existed a tense climate in the realm of internet freedom.
By looking at the actions taken in the past, and how they led to the
current state of the internet in the Middle East, we will conclude with
an analysis and a portrait of what this may indicate for the coming
years in the region.
Session 10. Thursday, June 2, 1:15-2:15, 380-381T
Hacktivism
Jonathan Potter, Ben Roth, Jesse Ruder
How should the group Anonymous be classified?
As defenders of the public good, or as agents of theft and disruption?
The Internet allows people with moral and political agendas to operate
differently than ever before in history.
Some claim that the scale and distributed nature of the Internet has
forced corporations and governments to be more transparent, empowering
people who would have ordinarily been isolated from information they need.
Others see hacktivist groups as platforms for theft and loss of privacy.
There are clear examples supporting each evaluation, and others that
resist easy classification.
We will attempt to divide historical examples of hacktivist actions into
two categories.
The first encompasses acts of “cosmetic” hacktivism intended
solely to spread a political or moral message without causing severe
damage.
Examples in this category include the relatively harmless
anti-proliferation vandalism perpetrated by WANK (Worms Against Nuclear
Killers) in 1989 and the defacement of the Kriegsman Fur and Outerwear
company’s website by animal rights activists in 1996.
The second category will include incidents which have more damaging
consequences, such as the attacks on the websites of MasterCard and Visa
by Anonymous in 2010.
Our presentation will use such incidents to investigate and detail these
categories, and the grey area in between, in an attempt to address the
ethical questions surrounding hacktivist behavior and culture.
Ultimately we will try to make a judgment about whether or not
hacktivism is good for the world.
Hiring programmers in light of US immigration law
Vivek Athalye, Carl Case, Andrew Duchi, Daniel Posch, Jordan Potter
The culture and process which surround the hiring of computer
programmers, particularly in the Silicon Valley region, differ sharply
from the culture and process for most jobs.
This culture is marked by fierce competition for top talent; constant
movement from one company to the next; and a persistent search for the
“ninja” programmers who ensure a product’s success.
These features have their origin in distinctive aspects of the
programming profession: most notably, a lack of adequate labor supply
and orders-of-magnitude variance in programmer productivity.
This hiring culture produces a job market that is not just nationally
integrated but internationally so.
A topic of constant chatter in Silicon Valley is the process of hiring
the best foreign programmers to bring their skills into the US.
To hire internationally, though, companies must navigate immigration law
in addition to usual labor laws.
There is widespread concern in the technology community that US
immigration law is deleterious to the success of American technology
companies.
In particular, much of that law treats immigration in a general fashion,
and does not distinguish between the immigration of highly skilled
foreign programmers and that of less skilled “typical” immigrants.
Thus there are constant proposals for modifications to the law that
might, for instance, create a large program to “fast-track”
the visa applications of skilled tech workers who already have job
offers.
Our project seeks to investigate the successes, problems, and dilemmas
that emerge in the complex interaction of the programmer labor market
and US immigration law.
We study the situation as it now stands, and what the government might
do to improve immigration laws.
In particular, we focus on ethical issues that arise when balancing the
often competing goals of furthering American competitiveness and
maintaining a fair and transparent system of immigration.
Multinational software development
Juan Batiz-Benet, Xuwen Cao, Yin Yin Wu
In recent years, countries within the EU have undertaken several
high-visibility projects that involve substantial amounts of software.
These projects have required significant intergovernmental cooperation
and pose new questions about ownership, rights, and costs amongst the
participating nation states.
Projects such as the Large Hadron Collider (LHC) at CERN, which will be
the focus of this project, benefit from cross-border collaboration.
The software running the LHC hardware is generally built by the
individual hardware contractors and interfaced by CERN.
Yet the LHC itself only generates data, faster than it can be written to
disk in its massive data centers, and ships it off to a very large
network (the computing grid) that stores and processes it.
Researchers run their own software on the grid, using the data stored at
particular sites and shipping their jobs wherever it makes sense.
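The practice of shipping jobs to where the data lives can be sketched as a toy scheduler. The site names, load figures, and scheduling rule below are all hypothetical illustrations; real grid middleware is far more involved:

```python
# Data-locality scheduling sketch: send each analysis job to a grid
# site that already stores the dataset it needs, falling back to the
# least-loaded site otherwise. All site names and loads are hypothetical.

SITES = {
    "site_a": {"datasets": {"run2011A"}, "load": 0.7},
    "site_b": {"datasets": {"run2011A", "run2011B"}, "load": 0.3},
    "site_c": {"datasets": set(), "load": 0.1},
}

def place_job(dataset):
    # Prefer sites that hold the data; among them pick the least loaded.
    holders = [s for s, info in SITES.items() if dataset in info["datasets"]]
    pool = holders or list(SITES)
    return min(pool, key=lambda s: SITES[s]["load"])

print(place_job("run2011A"))  # site_b: holds the data, lightest load
print(place_job("run2011C"))  # site_c: no holder, so least-loaded site
```

The design point is simply that moving a small job to a site is far cheaper than moving petabytes of data to the job.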
The majority of the software CERN produces is open source,
developed by many individuals and organizations all over the world.
Overall, this has maximized access for researchers around the world
and distributed the effort and cost of writing good software (CERN is an
organization devoted to research, not profit, so open-sourcing its
software makes the most sense).
In any case, software has not been the bottleneck for CERN, as the
physical engineering efforts have been much slower and much more costly.
CERN runs GNU/Linux (its own distribution, Scientific Linux).
Net neutrality
Tatiana Iskandar, Lee Semien, Dan Vinegrad
Imagine an Internet in which you cannot watch Netflix because Hulu is
paying your Internet Service Provider (ISP) to block its competitor.
This is exactly the situation that many proponents of Net Neutrality
fear.
Net Neutrality aims to maintain the current open state of the Internet
by preventing ISPs from discriminating against certain types of network
traffic.
In April of last year, a federal appeals court ruled that as long as
the Internet is a service and not a utility, the FCC does not have the
authority to regulate the network management practices of broadband
providers.
Still, in December, the FCC made a move towards Net Neutrality when it
approved a set of regulations for fixed-line providers.
The rules are meant to ensure transparency of network management
practices, prevent blocking of lawful content, and prohibit unreasonable
discrimination.
Republicans in Congress are attempting to overturn these regulations.
In this presentation, we will provide an in-depth survey of the current
and future state of the Net Neutrality debate.
We will close with a defense of our belief that an open Internet must
not prevent access to or prioritize certain types of legal content, must
not price discriminate within a medium, must not block or throttle
traffic to or from particular applications or services, and must be
transparent to the average user.
Session 11. Thursday, June 2, 2:15-3:15, 380-381T
Leeroy Jenkins: Free speech in online video games
Rob Blount, Andrea Chavez, Emilio Lopez, Corey Murphey, Chris Torres
In the relatively new industry of online gaming, a new trend has been
observed in the type of speech exchanged between players.
The increased volume of social interactions, combining the
competitiveness of the games with the anonymity of the Internet, leads
to increasingly hostile speech.
Cases of “griefing,” harassment, and trash-talking in online
games are becoming more prevalent and part of the interactive gaming
culture.
Our research seeks to answer whether this phenomenon is helpful or
harmful in today’s society.
We will seek to determine whether the harmful aspects of online games
outweigh the positive aspects of expanded social interaction.
We will also explore how the virtual space in games improves the quality
of the gaming experience, but will also consider the negative aspects
that come from the anonymity component of online gaming.
One interesting component is the demographics of the players themselves.
The gaming industry has grown to the point where there are few
boundaries on who plays, but this breadth can have negative
repercussions, since it mixes very different people in social
interactions that can turn detrimental.
Children are probably the group most vulnerable to this, as they are
exposed to racism, sexism, and foul language in online gaming.
Lastly, we will consider how to improve the online gaming experience by
looking into possible solutions to the type of negative speech
encountered while playing these games.
We will consider the role of freedom of speech but also offer the
possibility of restricting harmful speech.
If it is decided that some level of censorship is the best route, we
will discuss whether a regulating body is necessary or whether the
consequences should fall on individuals.
Extinct: Homo sapiens
Paul Chen, Adriana Diakite, Rachel Fenichel, Nathan Hall-Snyder, Julia Neidert
Imagine a world in which our current human species has evolved into a
new, enhanced species.
We have superior DNA, technologically advanced body components, and
augmented minds.
Tracing back to the Enlightenment Era belief in unbounded human
potential, transhumanist theory advocates the inevitability and
beneficence of this outcome.
But is this actually good?
Transhumanists strive to utilize technology to modify and thoroughly
improve Homo sapiens as a species.
They have envisioned various methods by which humans can be enhanced,
including bioengineering, neuroscience, nanotechnology, mechatronics,
artificial intelligence, and more.
Transhumanists hope not only to enhance the human body, but also to
control the evolutionary progress of our species to eventually transcend
natural limitations, reaching new levels of physical and mental
capability.
The difficult ethical questions raised by the visions of transhumanism
must be examined.
Do the benefits of an advanced species outweigh the risks?
How do we resolve issues such as the potential for genetic
discrimination, abuse of technology, and distortion of social order?
How would the politics of the transhumanist movement as it exists impact
the social order of a transhumanist society?
More fundamentally, transhumanism raises the question of what it even
means to be human.
Will transhumanism result in the extinction of humanity, or will it
rather extend humanity beyond its current feeble state?
Economies of virtual worlds
Matthew Chun-Lum, Tiphanie Gammon, Alexander Huang, Junichi Tsutsui
In the virtual worlds of Massively Multiplayer Online (MMO) games, such
as World of Warcraft and IMVU, virtual economies play a tremendous role
in the player experience.
Players can use in-game currency to buy upgrades that enhance their
avatars, as well as to purchase virtual goods, much as with real-world
money.
This in-game currency can be obtained in several ways depending on the
game: doing quests or missions, killing monsters, crafting items, or
even exchanging real-life currency for in-game currency.
Whichever method the player chooses to earn his or her in-game currency,
the player has to work hard to earn enough money to buy what he or she
wants.
As a result, MMOs have their own constantly churning economy.
However, as with any economy, problems arise with the protection of property
and economic disruption.
First of all, protection of virtual property in these games becomes a
big issue because players actually “work” to earn money to
buy virtual goods.
This work can be equated to real life work because there is a time
investment that is required to earn the money.
Therefore, it is only natural that players feel the need for protection
from hackers and malicious users.
Inflation is another in-game issue.
Because there are virtually no economic bounds, the virtual economy runs
the risk of inflation, with user-sold items selling at prices far higher
than originally intended.
It is also possible to purchase in-game currency with real-world money,
which can further fuel inflation.
In some games, this creates a “gray market” in which companies
purchase excess virtual currency and resell it to other users.
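The inflation dynamic described here can be illustrated with a toy model (all parameters are hypothetical, not drawn from any actual game): if quests mint new currency faster than item sinks remove it, the money supply grows, and with a roughly fixed stock of tradeable goods, player-to-player prices rise in proportion.

```python
# Toy model of in-game currency inflation (hypothetical parameters).
# Each day, quests mint new gold and a smaller amount leaves the
# economy via item sinks. With a fixed stock of tradeable goods, the
# market price of a good scales with money supply per good (a crude
# quantity-theory view of the in-game economy).

def simulate_inflation(days, minted_per_day, sink_per_day,
                       goods=1000, initial_supply=100_000):
    supply = initial_supply
    base_price = initial_supply / goods
    for _ in range(days):
        supply += minted_per_day - sink_per_day
    price = supply / goods
    return supply, price / base_price  # final supply, price multiplier

supply, multiplier = simulate_inflation(days=365,
                                        minted_per_day=2_000,
                                        sink_per_day=500)
print(f"money supply after a year: {supply:,}")
print(f"prices roughly {multiplier:.1f}x their original level")
```

Under these made-up numbers the money supply more than sextuples in a year, which is why long-running MMOs add aggressive currency sinks such as repair costs and auction fees.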
As a result of these problems, the producers of these games should
control the economy and provide protections for their players.
“Worse is Better” considered harmful
John Hiesey, Keith Schwarz
One of the core aspects of modern software engineering is the so-called
“worse is better” design strategy: the idea that it is better to
quickly release a flawed program than to meticulously design a correct
one.
This strategy allows software to progress rapidly and is arguably one of
the reasons for the success of the software industry.
However, the worse is better mentality also leads to spectacular
failures.
Because software is deployed before it is fully tested, clients often
assume a much greater risk by using the software than one would expect.
Compounding this problem, large software systems are often built of many
interacting components.
If any component contains a bug, the entire system can be at risk.
Security risks in particular are often hidden until someone exploits
them, lulling users into a false sense of security.
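The compounding risk from many components can be made concrete with a short calculation (the bug rates are illustrative numbers, not measured data): even when each component is individually quite reliable, the chance that at least one component harbors a bug grows quickly with the number of components.

```python
# Probability that a system of n components contains at least one
# buggy component, given each is buggy independently with probability p.
# Illustrative only: real components are neither independent nor
# uniformly reliable.

def prob_any_buggy(n, p):
    return 1 - (1 - p) ** n

# With an assumed 1% per-component bug rate:
for n in (10, 50, 200):
    print(f"{n:3d} components -> {prob_any_buggy(n, 0.01):.1%} chance of a bug")
```

Even at a 1% per-component rate, a 200-component system is more likely than not to contain a bug, which is the arithmetic behind the hidden risk described above.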
In this presentation, we will discuss how public policy can be used to
harness the strengths of the worse is better philosophy while avoiding
its pitfalls.
In particular, we will investigate how revised software liability laws,
combined with “software insurance,” may allow worse is
better to flourish while also providing better protection against
software failures.