Shallow Dive into the Ethics of Big Data



In this post, I want to scratch the surface of the ethical conundrums that pertain to big data, focusing specifically on social media. While the scope of privacy in social media was dubious at best for years, the issue was thrust upon the American public at a grand scale last year with Facebook's congressional hearing. This was likely the first time the American public was confronted with the question: what is the data mined from your social media accounts being used to achieve? A recent poll of American adults found that only 9% of respondents are "very confident" in the ability of social media platforms to protect their data.


However, it bears noting that social media users are responsible for what they share on these sites. My generation is intimately familiar with this issue: college students actively purging their social media accounts prior to job recruitment, high schoolers attempting to hide content from extended relatives upon receiving their "friend" requests. Social media sharing isn't a private conversation with an acquaintance; these pictures and words are broadcast on the internet for theoretically everyone to see, and they should be treated as such. Think before you post, because deleting the tweet doesn't necessarily mean it is gone forever.

All this being said, a typical social media user should understand that the data and cookies they willingly share are being used by the platform to serve them targeted ads. This is a simple concept: a free website brings in revenue from selling advertisements, and the more the platform knows about you, the better an ad can be targeted. Anytime you use a free product, you in turn become the product. This has been the case since the dawn of advertising. The clear next question here: "What else can this data be used to accomplish?" That query will return an enormous list of possibilities. Alternatively, a user can ask: "What else should this data be used to accomplish?" The difference between those results is what we are going to explore here.
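As a toy illustration of that revenue model (the profile weights, ad names, and tags below are all hypothetical, not any platform's real system), an ad server might simply score candidate ads against a profile built from tracked activity:

```python
# Hypothetical sketch of interest-based ad targeting.
# A profile built from tracked activity (page visits, cookies, etc.)
# is matched against candidate ads; richer profiles yield tighter matches.

def score_ad(ad_tags, profile):
    """Sum the profile weights for each of the ad's tags."""
    return sum(profile.get(tag, 0) for tag in ad_tags)

def pick_ad(ads, profile):
    """Return the ad whose tags best overlap the user's profile."""
    return max(ads, key=lambda ad: score_ad(ad["tags"], profile))

profile = {"running": 3, "travel": 1}  # weights inferred from tracked behavior
ads = [
    {"name": "running shoes", "tags": ["running", "fitness"]},
    {"name": "office chairs", "tags": ["furniture"]},
]

print(pick_ad(ads, profile)["name"])  # -> running shoes
```

The point is not the mechanics but the incentive: every additional signal the platform collects sharpens the match, which is exactly why the data is valuable.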

The knee-jerk method of answering this question is simple: does it feel creepy? If so, it must be wrong. Not only is this individually subjective, but the method offers no explanation of which specific element of a project crossed the supposed moral line, and it is impossible to effectively articulate that stance based on emotion alone. We need a framework to break down a broad scenario into manageable questions. Below is one such framework, a series of questions to consider before taking a moral stance:

  1. Utility- Does the possible good outweigh the possible harm?
  2. Rights- Would the legal or ethical rights of anyone impacted be compromised?
  3. Justice- Are all parties being treated equally?
  4. Virtue- Does the action align with the values of the organization and its community?
  5. Common Good- Does the action potentially help the community as a whole?

The idea of this framework is to work through each ethical basis and determine whether a moral issue may be present. Rather than trying to tackle the entire issue at once, this piecemeal approach may prove useful for forming and articulating an argument for why a proposed project should or should not continue. If an ethical basis is violated, perhaps more questions need to be asked. To see this framework in practice, we will break down a real-world issue involving monitoring students' social media pages in an attempt to prevent school shootings.
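To make the piecemeal approach concrete, here is a minimal, purely illustrative sketch (the basis names mirror the case study below; this is not a real decision tool, and the yes/no answers stand in for the nuanced discussion each question deserves). It records a concern per basis and reports which bases were flagged:

```python
# Illustrative sketch: walk a proposal through each ethical basis in turn
# and record where concerns surface, instead of judging the whole at once.

QUESTIONS = {
    "Utility":     "Does the possible good outweigh the possible harm?",
    "Rights":      "Would anyone's legal or ethical rights be compromised?",
    "Justice":     "Are all parties being treated equally?",
    "Virtue":      "Does the action align with the organization's values?",
    "Common Good": "Does it potentially help the community as a whole?",
}

def review(answers):
    """answers maps each basis to True (no concern) or False (concern).
    Returns the list of flagged bases, so the objection can be articulated
    basis by basis rather than as a vague overall feeling."""
    return [basis for basis, ok in answers.items() if not ok]

flagged = review({"Utility": True, "Rights": False, "Justice": False,
                  "Virtue": True, "Common Good": True})
for basis in flagged:
    print(f"Concern under {basis}: {QUESTIONS[basis]}")
```

The output names exactly which bases drove the objection, which is the articulation the "does it feel creepy?" test cannot provide.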

Here is a case sometimes used in educational settings to discuss these ethical challenges:

"Privacy, Technology, and School Shootings
Should our school purchase facial recognition technology and social media monitoring tools using sentiment analysis to avert school shootings? 
In the wake of recent school shootings that terrified both college campuses and the broader public, some schools and universities are implementing technical measures hoping to avert or reduce such incidents. Companies, in turn, are marketing various services for use in educational settings, including facial recognition technology and social media monitoring tools using sentiment analysis to identify (and then send on to school administrators) student posts on social media that might foreshadow violence. A New York Times article notes that “[m]ore than 100 public school districts and universities … have hired social media monitoring companies over the past five years.” According to the Times, the costs for such services range from a few thousand dollars to tens of thousands annually, and the programs are sometimes implemented by school districts without prior notification to students, parents, or school boards. The social media posts that are monitored and analyzed are public. The monitoring tools use algorithms to analyze the posts. A Wired magazine article titled “Schools Are Mining Students’ Social Media Posts for Signs of Trouble” cites Amanda Lenhart, a scholar who notes that research has shown “that it’s difficult for adults peering into those online communities from the outside to easily interpret the meaning of content there.” She adds that in the case of the new tools being offered to schools and universities, the problem “could be exacerbated by an algorithm that can’t possibly understand the context of what it was seeing.” Others have also expressed concerns about the effectiveness of the monitoring programs and about how they might impact the relationship between students and administrators. Educational organizations, however, are under pressure to show their communities that they are doing all they can to keep their students safe."


With this scenario in mind, we can work through the framework to assist in making an overall determination.

  1.  Utility- Does the possible good outweigh the harm presented in this case?
    • The potential good includes the prevention of mass shootings in a school setting, saving dozens of students' lives.
    • The potential harm includes ineffectively targeting students. What if the "problem" students identified by the algorithm are not and will never become school shooters? What if the school takes action against these innocent students before any wrongdoing occurs? What impact would that have on an innocent student?
  2. Rights- Would the legal or ethical rights of anyone impacted by this action be compromised?
    • Should the school board obtain permission from the students before using this service? Do the students have a right to know that the school is mining their social media profiles?
    • In turn, depending on the students' ages, should parents be informed about this activity?
    • Do the students and parents have the right to opt out of this activity? What impact would that have on the effectiveness of the algorithm? Would someone who opts out be reprimanded by the school?
  3. Justice- Are all parties being treated equally?
    • Are all students' social media accounts mined in the same fashion? If a student does not have social media, are they excluded from this company's search? Or does facial recognition include them regardless?
    • Is there bias in the algorithm? Are certain demographics of students given different sentiment analysis scores based on an unfair measure?
    • If students are required to participate, should teachers and school staff be treated in the same fashion?
  4. Virtue- What are the values of the school and school district? Does this mining act in accordance with them?
    • The school is most likely focused primarily on the well-being of its students. Does the possibility of preventing a shooting align with this value?
    • Does the school value students' privacy outside of school? Or does it value being involved in all aspects of students' lives?
  5. Common Good- Does using this service potentially help the community as a whole?
    • Does mining this data provide a benefit for the students, the teachers, the families, the faculty, and anyone else affected by the school?
    • Are the targeted students still treated as part of the community? Are they receiving care and attention? Or are they regarded in an accusatory fashion, without any attempt to help them work through whatever issues they may be facing?
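Lenhart's concern about context can be made concrete with a toy keyword-based "threat" scorer (the word list is hypothetical, and real sentiment-analysis tools are far more sophisticated, but they share the same failure mode when context is invisible to them):

```python
# Toy keyword-based "threat" scorer (hypothetical word list) showing how
# an algorithm with no sense of context can misread ordinary student talk.

THREAT_WORDS = {"shoot", "kill", "bomb"}

def threat_score(post):
    """Count how many flagged words appear in the post, ignoring context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return len(words & THREAT_WORDS)

posts = [
    "Gonna kill it at tryouts and shoot some hoops after!",  # harmless slang
    "The photography club will shoot portraits Friday.",     # harmless
]

for p in posts:
    print(threat_score(p), p)
# Both innocuous posts score above zero -- exactly the kind of false
# positive that feeds the Utility and Justice concerns above.
```

A real system adds weighting, machine learning, and human review, but the core problem stands: the signal the algorithm sees is words, not meaning.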

This website is not designed as an opinion piece, and I will not presume to know what is right or wrong. Rather, I wanted to explore a potentially beneficial way to approach ethical judgment calls by utilizing the outlined framework. Everyone forms their own moral compass, so different stakeholders may arrive at contrasting conclusions. The aid this process provides is allowing someone to articulate specifically why a project or scenario conflicts with what they believe to be right. That person can then recommend that, if a specific element of the project is altered, the project may proceed on a sounder ethical footing.

It is very likely that other, more sophisticated frameworks exist. Perhaps some are already in practice, where inputs are submitted and a binary Proceed/Stop output is returned. In practice, however, these may be seen as roadblocks to business growth. In an industry with virtually unlimited potential and limited regulation, we can only hope that the powers that be are considering not only whether we can do something, but whether we should.















