
Issues of Social Media: Fake News and Privacy

Paper Type: Free Essay Subject: Media
Wordcount: 5112 words Published: 8th Feb 2020

1. Social ties:

Define social capital and describe its relationship with the strength of social ties. Select a technology (TV, cell phone, social media, etc.). How does that technology affect our ability to accumulate and maintain social capital?

One of the main affordances of the Internet and its means of mediated communication is the ability to forge new social connections and maintain old ones with relative ease. In a general sense, these means allow us to share personal news, learn what others in our networks have been up to, connect with friends and friends of friends, and potentially leverage our connections for mutual benefit. The benefits we derive from our social networks can be thought of as social capital, defined as the resources built into our social networks, which can in turn be converted into reciprocal favors or information (Ellison, Vitak, Gray, & Lampe, 2014). We build social capital by engaging in social interactions online, and by building and maintaining our social networks, we can draw novel information and other resources from them.

When we think about the types of connections we make online, we typically categorize them as family, friends, co-workers, acquaintances, and friends of friends, among many others. Viewed as a web, these connections form several compact clusters tied together by various outliers. In discussing social capital and how we build it online, it is also necessary to distinguish two forms of social capital: bridging and bonding (Putnam, 2007). Bridging social capital typically refers to weaker relationships that create opportunities for information sharing, while bonding social capital refers to stronger connections that provide us with emotional support (Phua, Jin, & Kim, 2017).

When I look at my use of social media through the lens of social capital, I see most of my accounts as means of social and relationship maintenance. For instance, I’m able to send birthday messages to people across the country and overseas to let them know I’m thinking of them, and I’m able to share photos to let people know what I’m up to. Social media provides an ideal platform for this type of behavior, in that interactions occur quickly, potentially with many people, and at low cost, encouraging wide sharing, engagement, and reciprocal communication across many channels (Tong & Walther, 2011).

With that said, I believe different social media sites fall on a continuum in terms of whether they are more effective for bridging social capital (gaining new sources of information) or bonding social capital (cultivating closer relationships that provide stronger emotional bonds). Across the social media accounts I have, I have found that Twitter allows for the most bridging social capital, which research by Phua, Jin, and Kim (2017) supports. Their research into four social networking sites (Facebook, Twitter, Instagram, and Snapchat) and their influence on bridging and bonding social capital found that frequent users of Twitter had the highest level of bridging social capital, followed by Instagram, Facebook, and Snapchat. They also found that users of Snapchat reported the highest bonding social capital, followed by Facebook, Instagram, and Twitter (Phua, Jin, & Kim, 2017). Overall, their results suggest that people can develop positive social network site relationships and increase both bridging and bonding online social capital through selective and simultaneous use of several sites.

In looking at my usage of Twitter, Facebook, and Instagram (I don’t use Snapchat), my experience lines up with the research above. I use Twitter specifically to follow accounts that provide breaking news and information I don’t have, or want, access to on the other sites, as opposed to connecting with friends, which aligns with the concept of bridging social capital. One of the benefits of Twitter is that you can follow people from broad swaths of the public, such as celebrities, news organizations, and politicians, and there is no expectation or requirement that you follow your followers, or vice versa.

On Facebook, however, most of my connections–close friends and family, acquaintances, and friends of friends–are extensions of connections I’ve built offline. While I have both bridging and bonding ties on Facebook, most of my behavior and interactions revolve around subtle relationship-maintenance “social grooming” tasks (saying “Happy Birthday,” commenting on others’ posts, etc.) that signal the importance I place on the relationship (Donath, 2007) and in some respects serve as extensions of offline interactions. Research by Ellison et al. (2014) points to the importance of these relationship-maintenance tasks: social capital isn’t derived from having connections but from the effort one exerts in maintaining them.

While the social networks noted above are more personal in nature, it is also interesting to look at social network sites that cater to different categories of contacts, such as LinkedIn, which generally connects professionals with one another. Though some connections overlap between purely professional social network sites and more personal ones, connections on LinkedIn are generally acquaintances and people with whom one has infrequent contact, and it’s helpful to examine the differences relative to social capital. Some of the expected relationship-maintenance behaviors on LinkedIn highlight reciprocity and how bridging social capital can be converted into favors (Ellison et al., 2014). For example, if someone writes a recommendation for another, there is some expectation that the recommendation will be returned. Another aspect that shows bridging social capital, and one of LinkedIn’s main affordances, is that people may find leads for new employment from people they barely know or who are merely acquaintances. This is supported by Mark Granovetter’s book Getting a Job: A Study of Contacts and Careers, which showed that people were more likely to have found their jobs through occasional or infrequent contacts than through close friends (Parnes, 1976).

Generally speaking, my usage of social network sites shows that I gain more bridging social capital from sites like Twitter and LinkedIn, from which I gain information, and more bonding social capital through Facebook and Instagram, where I foster stronger relationships and offer and receive emotional support and kinship. I see the relationships one develops and maintains online as meaningful and in-depth, and I approach my interactions online with the idea of the perceived reality of online interactions, as explained by Green and Clark (2015): if people believe they can get something out of an online interaction, they will likely behave in ways that make that outcome more likely.


  • Donath, J. (2007). Signals in Social Supernets. Journal of Computer-Mediated Communication, 13(1), 231-251. doi:10.1111/j.1083-6101.2007.00394.x
  • Ellison, N. B., Vitak, J., Gray, R., & Lampe, C. (2014). Cultivating Social Resources on Social Network Sites: Facebook Relationship Maintenance Behaviors and Their Role in Social Capital Processes. Journal of Computer-Mediated Communication, 19(4), 855-870. doi:10.1111/jcc4.12078
  • Green, M. C., & Clark, J. L. (2015). Real or Ersatz? Determinants of Benefits and Costs of Online Social Interactions. The Handbook of the Psychology of Communication Technology, 247-269. doi:10.1002/9781118426456.ch11
  • Parnes, H. S. (1976). Getting a Job: A Study of Contacts and Careers. [Review of the book Getting a Job: A Study of Contacts and Careers, by M.S. Granovetter]. ILR Review, 29(2), 305-307. Retrieved May 10, 2019.
  • Phua, J., Jin, S. V., & Kim, J. (2017). Uses and gratifications of social networking sites for bridging and bonding social capital: A comparison of Facebook, Twitter, Instagram, and Snapchat. Computers in Human Behavior, 72, 115-122. doi:10.1016/j.chb.2017.02.041
  • Putnam, R. D. (2007). Bowling alone: The collapse and revival of American community. New York, NY: Simon & Schuster.
  • Tong, S., & Walther, J. B. (2011). Relational maintenance and CMC. In K. B.Wright and L. M.Webb (Eds.), Computer-mediated communication in personal relationships (pp. 98–118). New York: Peter Lang Publishing.

2. Fake news:

Select a large technological company (Facebook, Twitter, Google, YouTube, etc.). Imagine that you were in charge of designing the company’s approach to dealing with the spread of misinformation. How would you go about it? What are the benefits and drawbacks of the solution you have proposed?

What is Fake News?

Every day it seems we’re faced with someone decrying a news story as “fake news,” but what does that really mean? The definition has morphed over the years, from satirical commentary and tabloid journalism to content that resembles traditional news formats and taps into existing public perceptions with the intention to deliberately misinform (Waisbord, 2018). Another view of fake news, beyond news intended to misinform, is news with which one disagrees, or news with an agenda with which one disagrees, with the term “fake news” used as an epithet (Gelfert, 2018).

While the proliferation of the term “fake news” is relatively recent, news meant to misinform and news-like fiction meant to deceive are not new. Similarly, the use of propaganda to persuade the public, or of distortion in communications, has a long history (Waisbord, 2018). What is different now is how fast, and through how many channels, distorted news can travel. In addition, the sheer amount of content, and the number of people creating it outside the realm of traditional news producers and curators, makes it difficult to vet information, creating an opening for various actors to swarm social media with disinformation meant to stoke confusion (Waisbord, 2018).

What are the Challenges in Combating it?

There are multiple challenges to stemming the flow of misinformation online.

  1. On social media, content can be shared with many people without being filtered through any third-party fact-checking or editing, and a person with no real track record or authority has access to the same number of people as established news networks (Allcott & Gentzkow, 2017).
  2. Furthermore, an important aspect is the sheer scale and volume of news stories, and their resultant interactions, that we would have to contend with. In research on social media and fake news in the 2016 election, Allcott and Gentzkow (2017) compiled a database of 156 election-related news stories that were deemed false; 115 favored Donald Trump and were shared 30 million times, while 41 favored Hillary Clinton and were shared 7.6 million times.
  3. False news has also been shown to move across networks faster and more pervasively than real news. In research examining a broad sample of over 126,000 rumor cascades from 2006 to 2017, false rumors diffused on Twitter farther, faster, deeper, and more broadly than true rumors, with false political rumors spreading fastest (Vosoughi, Roy, & Aral, 2018).
  4. There have been arguments that we live in a post-truth world, where rather than journalism serving as a consensus builder, news and information now flow to communities with different concepts of truth, rather than to the public as a whole (Waisbord, 2018). The concept gained such a foothold that “post-truth” was named word of the year by Oxford Dictionaries, defined as a condition in which objective facts are less influential than appeals to emotion and personal belief in shaping public opinion (Wang, 2016).

My Proposed Solution

If I were to lead Facebook’s efforts to stem the spread of misinformation on its network, several variables would guide my decision making. First, my solution would need to account for the sheer volume of content, coming from millions of content producers every day, with information that may not have gone through any third-party editorial judgment or fact-checking, leaving content potentially poorly organized, out of date, or inaccurate (Metzger & Flanagin, 2015). Second, my solution would need to help readers with the subjective concept of content credibility, to counter the volume of content, the sources of unfiltered information, and the idea of post-truth. Third, my solution would need to account for how users share and interact with content, and provide context and knowledge that help users make decisions.

My solution is as follows:

  1. Define a model for what fake news is and is not, to serve as a guiding principle. As an example, Allcott and Gentzkow (2017) exclude unintentional reporting mistakes, rumors independent of actual news articles, conspiracy theories, and satire, among others. Having this in place serves as a framework.
  2. Considering all the data I would have at my disposal, I would research past links from external sources shared through Facebook to determine where they came from and how they were shared.
  3. With a random sample of these links, I would pass them through independent fact-checking sites like snopes.com and politifact.com (supplemented by a team of editors to handle the volume of work) to assess long-term, consistent reliability and assign credibility, establishing a baseline for a credibility algorithm.
  4. I would design a credibility algorithm that takes into quantifiable account source, author, and message cues (Metzger & Flanagin, 2015) to assign a score to external third-party links from news sources. (To account for video, I might incorporate technology to transcribe spoken speech to text.)
  5. I would add social elements, such as allowing people to review or grade the credibility score above. I would also incorporate cues such as who interacted with an article and how (likes, shares, etc.), as well as users’ independent reviews of source credibility.
  6. I would add a nudge when people want to share an article, showing the credibility score and friends’ opinions, with a warning when something falls on the lower end of the credibility scale. If the nudge works, it may reduce a non-credible story’s reach across the network.
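The scoring and nudge steps above can be sketched in code. This is a minimal illustration, not Facebook’s actual system: the cue names, the weights, and the warning threshold are all invented assumptions for the purpose of showing how the pieces would fit together.

```python
# Illustrative sketch of the credibility-scoring pipeline (steps 4-6 above).
# All cue names, weights, and thresholds are hypothetical assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Article:
    source_track_record: float      # 0-1, from the fact-checked baseline (step 3)
    author_reputation: float        # 0-1, author cue
    message_quality: float          # 0-1, message cues (sourcing, tone, etc.)
    user_ratings: List[float] = field(default_factory=list)  # crowd reviews (step 5)


def credibility_score(a: Article) -> float:
    """Blend quantifiable source, author, and message cues with crowd ratings."""
    algo = (0.5 * a.source_track_record
            + 0.2 * a.author_reputation
            + 0.3 * a.message_quality)
    if a.user_ratings:
        crowd = sum(a.user_ratings) / len(a.user_ratings)
        return 0.7 * algo + 0.3 * crowd  # algorithmic score tempered by the crowd
    return algo


def share_nudge(a: Article, threshold: float = 0.4) -> str:
    """Step 6: surface the score, warning when credibility is low."""
    score = credibility_score(a)
    if score < threshold:
        return f"Warning: low credibility ({score:.2f}). Share anyway?"
    return f"Credibility {score:.2f}. Share?"
```

In a real system the weights in `credibility_score` would be fitted against the fact-checked baseline from step 3 rather than hand-picked, and the threshold for the nudge would be tuned empirically.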

Potential Benefits and Drawbacks 

My solution’s credibility score is based on years of aggregated data from reviews of content, providing a baseline for predicting the credibility of new content. It draws on warranting theory and signaling theory to influence users’ views of what is and is not a credible source, because such cues are hard to fake. As described by Metzger and Flanagin (2015), the warranting principle suggests that user-generated ratings and reviews may be viewed as credible because they cannot be easily manipulated by the source, and signaling theory suggests that aggregated, user-provided content signals credibility because it is difficult for one person to manipulate an aggregate. My solution also includes an element of social influence, which helps users gain information when they are uncertain of their own perceptions (Metzger & Flanagin, 2015).
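The signaling-theory point that an aggregate is hard for one person to manipulate can be illustrated with a toy calculation; the ratings below are invented for the example:

```python
# Toy illustration: a single manipulator barely moves a large aggregate rating.
honest_ratings = [0.8] * 99            # 99 genuine users rate a source 0.8
manipulated = honest_ratings + [0.0]   # one bad actor submits the lowest rating

aggregate = sum(manipulated) / len(manipulated)
shift = 0.8 - aggregate
print(f"aggregate after attack: {aggregate:.3f} (shift of only {shift:.3f})")
```

One hostile rating shifts the hundred-user aggregate by well under a hundredth of a point, which is why aggregated crowd signals are more robust than any single review.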

My solution has inherent limitations. A mechanism for evaluating source and content credibility might account for the more quantifiable aspects, such as site or source cues, author cues, and message cues, but it cannot account for receiver characteristics, which vary for each individual (Metzger & Flanagin, 2015). Thus, my credibility score can only go so far. In addition, while the algorithm may make recommendations quickly and efficiently and have the appearance of neutrality, it is only as strong as the information that went into developing it: the baseline derived from the source research. If that research is somehow biased, any resultant credibility score can inherit the bias (Rosenblat, Kneese, & Boyd, 2014).

Additional drawbacks include the cost of implementing the research, which is likely to require a large employee commitment, and the need for ongoing monitoring and evaluation to ensure the solution meets its intended goal.


  • Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. doi:10.3386/w23089
  • Gelfert, A. (2018). Fake News: A Definition. Informal Logic, 38(1), 84-117. doi:10.22329/il.v38i1.5068
  • Metzger, M. J., & Flanagin, A. J. (2015). Psychological Approaches to Credibility Assessment Online. The Handbook of the Psychology of Communication Technology, 445-466. doi:10.1002/9781118426456.ch20
  • Rosenblat, A., Kneese, T., & Boyd, D. (2014). Workshop Primer: Algorithmic Accountability. SSRN Electronic Journal. doi:10.2139/ssrn.2535540
  • Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. doi:10.1126/science.aap9559
  • Waisbord, S. (2018). Truth is What Happens to News. Journalism Studies, 19(13), 1866-1878. doi:10.1080/1461670x.2018.1492881
  • Wang, A. B. (2016, November 16). ‘Post-truth’ named 2016 word of the year by Oxford Dictionaries. Retrieved from https://www.washingtonpost.com/news/the-fix/wp/2016/11/16/post-truth-named-2016-word-of-the-year-by-oxford-dictionaries/?utm_term=.d443e456a5bb


4. Privacy:

Define information privacy. What are the 3 most important threats to individual information privacy today? How should we go about addressing those threats?

In the relatively short history of the Internet, we have seen several economic underpinnings drive the business of the web. In the late 1990s and early 2000s, web traffic and “eyeballs” were the magic metrics that drove businesses forward. As the technology advanced and websites were able to track our movements online with greater precision and more nuanced data, the information they gleaned from users became the data driving the targeted advertisements that financed their services (Camenisch, 2012).

The way we interact with the internet has changed with the advent of social networks and social networking sites that promote sharing information across networks and with connections. At this point, almost any act online leaves a digital trace that reveals information–such as interests, traits, and beliefs–to other people, to businesses, and to governments, and along with this data collection have come advances in the ability to analyze and make inferences from such information (Acquisti, Brandimarte, & Loewenstein, 2015).

While this sharing is necessary for a social network to grow and be effective, it blurs the line between what is public and what is private, and people have to think critically about how to manage their public and private information while sharing in social networking environments (Papacharissi & Gibson, 2011). Considering all the information about us that is available online and that we have provided ourselves, along with constant data monitoring and storage, it is no wonder there has been growing concern about what happens to that information and about the privacy controls in place to keep it safe.

Therein lies the crux of what information privacy is. If privacy is the right to be left alone and to have some control over what information enters the public sphere, then information privacy can be understood as a person’s ability to control what information about themselves reaches others (Burgoon et al., 1989).

Both the users of online and mobile applications and the firms that provide these services benefit from the data we share. However, the value of our information has led to security challenges, threats, and vulnerabilities.

Where are the Threats?

Considering how much value can be derived from people’s data, it follows that people with nefarious purposes will try to get at it. Statistics show that 2017 was considered a record year for data breaches. Not surprisingly, the previous record year was 2016, and the one before that was 2015, extending all the way back to 2013 (Dodt, 2018). Given the prevalence of these breaches, much has been written about data privacy and security and about the multitude of threats people face in protecting their privacy. While the following list is not exhaustive, these three areas pose some of the biggest threats to information privacy.

Data security: Our data pass through many touch points before they are at rest, and any vulnerability along the chain may be exploited by hackers. In addition, we have to consider how secure the devices we use to access the internet and mobile applications are, and each website and application may adhere to different security protocols. The security of our data faces numerous threats and vulnerabilities, especially on mobile platforms (Khan, Abbas, & Al-Muhtadi, 2015):

  • Physical threats (Bluetooth, lost or stolen devices)
  • Application-based threats (spyware and malware)
  • Network-based threats (denial-of-service attacks, WiFi sniffing)
  • Web-based threats (phishing scams)

What is collected, how is it used, and who controls it: The sheer amount of data collected on us, whether information we provide through likes, clicks, and shares or technical data collected through our web and application usage, helps finance the services we use online, and our data have become the new currency of the internet (Camenisch, 2012). However, do we really know what is collected about us and how the data are mined by interested third parties? The potential for our data to be misused–for under-the-radar manipulation or influence campaigns, for instance–is alarming (Acquisti, Brandimarte, & Loewenstein, 2015). Furthermore, the U.S. does not have a strong regulatory framework to guide data collection, usage, and sharing; instead, U.S. policy leaves the matter to organizations, assuming that businesses disclose their practices, and neither adjusts nor restricts data gathering and distribution (Papacharissi & Gibson, 2011; Dodt, 2018).

People: When it comes to information privacy, we can be our own worst enemy. First, do we have the time and patience to read through privacy policies and terms of service when we use applications and websites, so that we know what information they gather and how they intend to use it? Second, do we have the web savvy to recognize malicious events like phishing scams and malicious software hidden in apparently innocent email attachments? It is usually users who give hackers the network access they need to exploit its vulnerabilities (Kam, 2015). Third, people’s actions at times don’t line up with their perceptions or beliefs, as exemplified by the privacy paradox, in which our attitudes about privacy don’t match our behavior (Acquisti, Brandimarte, & Loewenstein, 2015).

How can we Tackle Them?

Just as much has been written about ways to address these concerns as about the threats themselves. The following list may not be exhaustive, but it points to some ways we may be able to deal with the threats noted above.

Data Security: As defense against the multiple threats our data face while traversing the infrastructure of the internet and mobile environments, Khan, Abbas, and Al-Muhtadi (2015) point to a number of defensive mechanisms that can be employed across the various security touch points, especially in the mobile world, involving application stores such as Apple’s App Store; OS and device makers such as Apple and Google; application developers; and biometric approaches. These mechanisms include incorporating data security and privacy measures into the design and manufacture of devices and operating systems, removing applications from app stores that improperly handle privacy or don’t align with data security protocols, and using biometric mechanisms to secure enrollment and authentication processes (Khan, Abbas, & Al-Muhtadi, 2015).

What is collected, how is it used, and who controls it: In terms of data collection, one safeguard is to ensure that websites and applications gather only the information they need to operate their business; if they require additional data for advertising or other endeavors, they must first obtain consent from users (Dodt, 2018). There must also be a regulatory framework that gives people more control over their data and lets them decide for themselves what balance between public and private they are comfortable with (Papacharissi & Gibson, 2011). The European Union’s General Data Protection Regulation (“GDPR”), enforced since 2018, and the California Consumer Privacy Act (“CCPA”), which goes into effect in 2020, aim to give people broader control over their data, offering the right of access and greater control over how their information is shared or sold (“CCPA”, 2018).
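The data-minimization and consent principle described here can be sketched as a simple filter. The field names and consent categories below are invented for illustration and are not drawn from the GDPR or CCPA text:

```python
# Sketch of GDPR/CCPA-style data minimization: store only fields needed to
# operate the service, and anything beyond that only with explicit consent.
# The specific field names and categories are hypothetical examples.

REQUIRED_FIELDS = {"email", "password_hash"}          # needed to run the service
OPTIONAL_FIELDS = {"location", "contacts", "ad_id"}   # require opt-in consent


def collect(profile: dict, consents: set) -> dict:
    """Return only the data the service is allowed to store."""
    allowed = REQUIRED_FIELDS | (OPTIONAL_FIELDS & consents)
    return {k: v for k, v in profile.items() if k in allowed}
```

For example, `collect({"email": "a@b.c", "location": "NYC"}, consents=set())` would drop the location, since the user never opted in; granting the `"location"` consent lets it through.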

People: To combat the threat we pose to our own data security and privacy, people may need to practice greater privacy and security literacy by following suggestions such as those at https://www.consumerreports.org/privacy/66-ways-to-protect-your-privacy-right-now/. In addition, web applications such as Clarip use artificial intelligence and algorithms to give consumers simplified information on who collects and shares their data and what information they hold. However, information and empowerment may not always be enough, so developing policy along the lines noted above, aimed at a better balance between the people who provide data and the firms that process and use it, may help people navigate complex privacy issues and minimize the effort needed to make sense of them (Acquisti, Brandimarte, & Loewenstein, 2015).


  • Acquisti, A., Brandimarte, L., & Loewenstein, G. (2015). Privacy and Human Behavior in the Information Age. The Cambridge Handbook of Consumer Privacy, 184-197. doi:10.1017/9781316831960.010
  • Burgoon, J. K., Parrott, R., Poire, B. A., Kelley, D. L., Walther, J. B., & Perry, D. (1989). Maintaining and Restoring Privacy through Communication in Different Types of Relationships. Journal of Social and Personal Relationships, 6(2), 131-158. doi:10.1177/026540758900600201
  • Camenisch, J. (2012). Information privacy?! Computer Networks, 56(18), 3834-3848. doi:10.1016/j.comnet.2012.10.012
  • CCPA, face to face with the GDPR: An in depth comparative analysis. (2018, November 28). Retrieved from https://fpf.org/2018/11/28/fpf-and-dataguidance-comparison-guide-gdpr-vs-ccpa/
  • Dodt, C. (2018, August 23). The 10 Largest Privacy Threats in 2018. Retrieved from https://resources.infosecinstitute.com/the-10-largest-privacy-threats-in-2018/#gref
  • Kam, R. (2015, October 22). The Biggest Threat To Data Security? Humans, Of Course. Retrieved from https://iapp.org/news/a/the-biggest-threat-to-data-security-humans-of-course/
  • Khan, J., Abbas, H., & Al-Muhtadi, J. (2015). Survey on Mobile Users Data Privacy Threats and Defense Mechanisms. Procedia Computer Science, 56, 376-383. doi:10.1016/j.procs.2015.07.223
  • Papacharissi, Z., & Gibson, P. L. (2011). Fifteen Minutes of Privacy: Privacy, Sociality, and Publicity on Social Network Sites. Privacy Online, 75-89. doi:10.1007/978-3-642-21521-6_7

