The State of Social Media Infrastructure: The Security Threats to Your Social Infrastructure

White Paper

Social media threats are on the rise. The explosive growth of this new digital communications platform has created opportunity for hackers and fraudsters to target big brands and exploit the upswing in social media marketing investment.

This report analyzes the findings of a study on the scale and scope of social media threats to the enterprise and its community, and illustrates the threats you need to address.

Introduction

As social media has expanded as a marketing, sales, recruiting, and customer service tool, enterprise social media infrastructure has become a bigger target for hackers, scammers, and other malicious actors. In the first report of this three-part series, we examined the scope of social media infrastructure by analyzing the Fortune 100’s social media accounts, applications, and account activity. We found:

  • The average Fortune 100 brand has 320 social media accounts.
  • The Fortune 100 use 2,100 unique publishing tools and applications, and each company averages 13 distinct publishing tools.
  • Only 30% of social publishing done by the Fortune 100 on its accounts is through a professional tool.

In this second of three reports by Nexgate, we investigate the security threats plaguing the infrastructure of the Fortune 100 companies. As national brands with millions of followers, the Fortune 100 serve as a microcosm for the tug-of-war between the widespread adoption of this new social communications medium and the largely unresolved security threats to its infrastructure.

We began by scanning the top social media networks and channels to uncover the full infrastructure of these enterprises, including the social accounts and corresponding applications and activity occurring on those accounts. We found that security threats to that infrastructure are growing in three areas:

  1. Unauthorized Social Media Accounts – These are accounts created without the explicit permission and/or knowledge of the head of social media. They may be created by employees and fans, or by those who seek to create negative conversations about the brand (e.g., a protest account) or to defraud or harm it or its customers (e.g., a fraudulent account).
  2. Content Threats – Malware links, phishing lures, spam, pornography, hate speech, and other dangerous content are no longer confined to email; bad content on the pages of branded social media has increased significantly as bad actors look to leverage and tarnish the popularity of a brand on its own social media properties.
  3. Account Hijack – Social media account hacks and hijacks of major brands occur nearly every day, as hackers are increasingly recognizing the opportunity in social media to steal customer information, distribute malware, embarrass brands, or engage in other malicious activities.

The results of the study indicate that, on average, 40% of all Facebook accounts and 20% of all Twitter accounts claiming to represent a brand are now unauthorized. Social spam is similarly increasing, with 658% growth from mid-2013, when Nexgate’s State of Social Media Spam Report was released. Hijacks of social accounts have become so commonplace that there is now a set of historical patterns that can be used to determine whether or not a hijack has occurred.

This study is intended to serve as a tool for better understanding security threats that companies face in the age of social, and how they can better combat those threats given social media’s unique environment and security requirements.

Research Methodology

From July 2013 to June 2014, Nexgate researched the accounts created and run by each Fortune 100 company on each top social network, with particular focus on Facebook, Twitter, and YouTube. After finding roughly 32,000 accounts run by those 100 companies, Nexgate explored the activity on those accounts, represented by more than 60 million pieces of content and 2,100 unique applications used by those brands to communicate. That brand-generated content resulted in nearly 1 billion pieces of engagement such as likes, shares, followers, and subscribers. The accounts, content, public communications, applications used, and social metadata (e.g., time of the post) were collected with Nexgate’s patent-pending technology through approved integrations with each social media platform’s public APIs. Nexgate’s technology, expert systems, and researchers applied unique contextual, linguistic, behavioral, application, and content classifiers to this data in order to accurately find company accounts, activity, and the related risks to them or on them.
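
Nexgate’s collection technology itself is proprietary and not detailed in this report. As a rough illustration of the kind of inventory pass the methodology describes, the Python sketch below walks a list of branded accounts through a hypothetical public API and tallies posts, distinct publishing applications, and engagement; the endpoint, field names, and response shape are placeholder assumptions, not any real platform’s API.

```python
# Illustrative inventory pass over branded accounts via a hypothetical public API.
# The endpoint URL and response fields are placeholders, not any real platform's API.
from collections import Counter
import requests

API_BASE = "https://api.socialplatform.example"  # hypothetical

def fetch_json(path):
    resp = requests.get(f"{API_BASE}{path}", timeout=10)
    resp.raise_for_status()
    return resp.json()

def inventory_brand(account_ids):
    """Count posts, distinct publishing apps, and engagement across a brand's accounts."""
    posts = 0
    apps = set()
    engagement = Counter()
    for account_id in account_ids:
        for post in fetch_json(f"/accounts/{account_id}/posts"):  # assumed list of dicts
            posts += 1
            apps.add(post.get("published_via", "native"))          # assumed field name
            engagement["likes"] += post.get("like_count", 0)
            engagement["shares"] += post.get("share_count", 0)
    return {"posts": posts, "distinct_apps": len(apps), "engagement": dict(engagement)}

if __name__ == "__main__":
    print(inventory_brand(["example-brand-main", "example-brand-support"]))
```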

Summary of Results

Unauthorized Accounts

  • On average, 40% of Facebook accounts claiming to represent a Fortune 100 brand are unauthorized.
  • 20% of Twitter accounts posing as a Fortune 100 brand are unauthorized.
  • Unauthorized accounts advertising giveaways of “free” gifts or membership points are amongst the most common. We found up to 330 such accounts for a single brand.

Content-Based Threats

A total of 1.8 million security and inappropriate content incidents were found across the 32,000 accounts studied.

  • 968,396 of those messages contain profanity and adult language.
  • 51,073 of those messages contain hate speech or personal threats.
  • 28,540 of those messages contain bullying.
  • 162,600 of those messages contain spam.

URLs in Social Media Posts

  • 99% of malicious URLs lead to malware installation or phishing attacks.
  • 72% of URLs leading to compromised sites are found on child-targeted accounts, such as those of cartoon shows.
  • 60% of URLs leading to phishing attacks are found on retail and news accounts.
  • 85% of URLs leading to malware links target financial and entertainment accounts.
  • 91% of URLs promoting the ability to hack are found on news and entertainment accounts.

Account Hijacks

The average firm exhibited 2.29 account hijack indicators (e.g. malware links posted by brand account managers).

Unauthorized Accounts

Definitions:

  • Branded Account: An account officially owned and operated by a brand.
  • Unauthorized Account: Any social media account not officially created by the brand itself, whether created with good intentions (e.g., by employees, fans, or followers) or bad ones (e.g., a fraudulent or protest account).
  • Fraudulent Account: A type of unauthorized account created to mimic a legitimate account for the purposes of embarrassing the brand, selling a bogus product, distributing malware, broadcasting false information, or defrauding a brand’s customers or stealing their account credentials or personal information.
  • Protest Account: A type of unauthorized account created with the purpose of generating negative conversation around a brand.

Unauthorized accounts are problematic because they misrepresent the brand, mislead customers, and redirect marketing dollars towards promoting unofficial pages. They can also embarrass the brand, sell bogus products, distribute malware, broadcast false information, and steal account credentials or personal or financial information.

Nexgate’s study of the Fortune 100’s social infrastructure suggests that unauthorized accounts are an extensive issue across the major social networks. On average, 40% of Facebook accounts and 20% of all Twitter accounts claiming to represent a Fortune 100 brand are unauthorized. Additionally, we found up to 330 unaffiliated accounts per Fortune 100 brand advertising giveaways of “free” gifts or points. These unaffiliated promotional accounts degrade the performance of marketing programs and divert otherwise potential long-term customers.

Unauthorized Account Example – Amazon

The page below was created using the Amazon logo and claims to be a “Deals and Promotions” page.

A closer look reveals that the page is unauthorized. The links for the deals posted on the page appear to be random items from unknown sellers, thus opening up the possibility for fraud or harm to the buyer. Moreover, many of the wall posts are completely unrelated to Amazon sales, and contain offensive jokes and other off-brand messaging. Accounts like these redirect followers, damage brand image, and take advantage of the hard-earned trust built by Amazon with its customers.

Comparing Unauthorized Accounts Across Verticals

We found that Fortune 100 companies in the financial category have the highest number of unauthorized accounts on both Facebook and Twitter, where unauthorized accounts make up 55% and 25% of accounts, respectively. Unauthorized accounts make up 35% and 10% of news accounts across Facebook and Twitter, respectively, and 25% and 15% of Facebook and Twitter accounts owned by entertainment companies, respectively.

Protest Account Examples – Chase Bank and Shell

In addition to accounts that attempt to pose as a brand, we discovered accounts that protest against a brand. 20% of all accounts affiliated with Fortune 100 companies are protest accounts. These accounts damage social media ROI because they undermine the resources dedicated to building up the brand, and distract from the conversation that the brand is trying to create with its audience. Protest accounts provide a platform with a potentially very wide reach for a small number of people to broadcast negative, biased, and potentially false information about the brand. Indeed, depending on how well the accounts are executed, a single damaging message can reach millions of customers within a matter of seconds.

While protest accounts are bound to occur and people certainly have the right to create them, it is important for brands to monitor them to ensure they do not spiral out of control, lead to false reports, abuse customers, commit fraud, or deface the brand, all of which can have a damaging impact.

Shell famously faced a problem in 2012 when an unauthorized website and Twitter account were created by Greenpeace to drum up negative publicity about the company’s drilling practices in the Arctic. Much of the public – including Shell’s customers and even some prominent journalists – was duped into thinking that Shell had had a social media “meltdown” when, in reality, the content in question had been churned out by a third-party masquerading as Shell to protest the company’s business practices.

Greenpeace’s unauthorized Shell Twitter account demonstrates the importance of monitoring for unauthorized and unofficial accounts. Twitter’s content policy states that accounts that impersonate a brand, violate a trademark, or infringe copyright are all subject to removal, but without knowing that these accounts exist, reporting them to Twitter and taking action to protect brand image becomes impossible.

Free Giveaway Account Examples - Xbox and Spotify

Nexgate’s study found that up to 330 unofficial accounts per brand claim to give away free items. The following Facebook page allegedly gives away “Free XBOX Live Microsoft Points”:

Accounts advertising free or discounted products are problematic not only in terms of the company’s bottom line, but also in terms of brand reputation. These accounts can be used to give away or sell unauthorized licenses, steal customers’ personally identifiable information, redirect sales from the brand, and weaken official marketing investments for the fraudsters’ own gain. The volume of these unauthorized accounts also makes it hard for customers to find legitimate brand accounts: when users search for a brand, they are presented with dozens or even hundreds of results, many of which are unauthorized accounts. Many users will end up viewing these accounts and may never view the legitimate ones. In short, unauthorized accounts steal social media audience, undermining marketing investments in social media.

Below is another example – an offer for “low-cost Spotify Premium codes.” The page looks like a real Spotify page and has 11,000 likes; in reality, however, it is unaffiliated with Spotify. Instead, it siphons business from Spotify and warps the marketing message of the real brand.

Tips for Identifying Unauthorized Accounts

Several features distinguish an official account from a typical unauthorized or protest account:

  • The number of “likes” on an official account is typically much higher.
  • Generally, there is increased recent activity on an official page.
  • There is often overall more activity on an official page.

Although users well versed in social media may be able to distinguish between unauthorized and official accounts, quickly taking down these accounts requires automated mechanisms. The average Fortune 100 brand has 320 social media accounts, 96 of which are unauthorized. With new accounts – both legitimate and unauthorized – constantly appearing and disappearing, it is virtually impossible to monitor the state of these accounts manually.

The sheer volume and dynamics of branded social media accounts mean that, inevitably, companies are unaware of unauthorized accounts affiliated with their brand. Indeed, simply inventorying these accounts manually would be more than a full-time job. Yet without at least being aware of the accounts, brands cannot take the action necessary to remove them, and these accounts will continue to churn out bad content.
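
To make the distinguishing features listed above concrete, the minimal Python sketch below ranks accounts that claim to represent a brand by those three signals and flags the rest for review. The weights, thresholds, and example accounts are illustrative assumptions only; Nexgate’s actual classifiers also draw on the contextual, linguistic, and behavioral signals described in the methodology section.

```python
# Minimal triage sketch: rank accounts claiming to represent a brand by the three
# signals listed above. Weights and example values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class CandidateAccount:
    handle: str
    likes: int               # audience size
    posts_last_30_days: int  # recent activity
    total_posts: int         # overall activity

def official_likelihood(acct: CandidateAccount) -> float:
    """Higher score = more likely to be the official account."""
    return 1.0 * acct.likes + 500 * acct.posts_last_30_days + 50 * acct.total_posts

def triage(candidates):
    """Treat the top-scoring candidate as the presumed official account and flag
    the remaining candidates for manual review / reporting to the platform."""
    ranked = sorted(candidates, key=official_likelihood, reverse=True)
    return ranked[0], ranked[1:]

if __name__ == "__main__":
    accounts = [
        CandidateAccount("ExampleBrand", likes=2_500_000, posts_last_30_days=40, total_posts=8_000),
        CandidateAccount("ExampleBrand Deals & Promotions", likes=11_000, posts_last_30_days=3, total_posts=120),
    ]
    official, suspects = triage(accounts)
    print("presumed official:", official.handle)
    print("flag for review:", [a.handle for a in suspects])
```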

Content Threat Types

Social Spam: A malicious or otherwise unwanted message posted by an individual "spammer" to multiple social accounts. Social spam is typically delivered from fake accounts by criminals, hackers, con artists, and others, but it may also be delivered from legitimate accounts. It includes a variety of unwanted content types, including malware links, phishing links, scams (e.g., “work from home”), and advertising.

Malware: A social media message that contains a link to a website that attempts to trick users into downloading malware. Malware content is typically included within social spam, although it can target specific individuals as well.

Phishing: A social media message that contains a link to a website that attempts to trick users into providing account credentials. A phishing message is typically designed to mimic an official communication from a social media platform or account owner. For example, a phishing message may request that the user click a bogus link to “authorize” or confirm their bank accounts by providing their credentials. However, the bogus link leads to a hacker-owned site that is designed to steal credentials. Phishing content is typically included within social spam, although it can target specific individuals as well.

Con Schemes: A social media spam message that promises to help people “Make Money Working from Home”, “Lose Weight Fast”, etc. The messages include a link to a Web site or instructions to contact the spammer directly to learn more. These schemes usually require up-front payment with longer term prospects for results. They often originate from real social media accounts and are linked to legal network marketing organizations, but the majority of people responding to these messages “Lose Money Fast”.

Content-based threats are becoming increasingly prevalent. Indeed, there has been a 658% increase in social spam since mid-2013. 968,396 total messages across Fortune 100 accounts contain profanity and adult language. 51,073 messages contain hate speech or personal threats. 28,540 of those messages contain bullying.

Spam, phishing, malware, and other inappropriate content not only put the audience and community managers at risk, but also create a self-perpetuating cycle: companies pour resources into generating visibility and audience engagement, only to have that visibility co-opted by bad actors. The enterprise is, in effect, paying for bad actors to defraud its audience. Similarly, racism, hate speech, and pornography distract from the conversations between companies and their audiences, warp social media marketing messages, and, consequently, damage social media ROI.

Social spam includes the same content seen in email spam, but it can be distributed far more efficiently. Email spam is essentially a one-to-one interaction that requires significant time and effort, whereas social spam reaches hundreds, thousands, or even millions of people with a single post. Moreover, automated social media spam filtering controls are not widely applied, and manual moderation does not scale for organizations receiving thousands of posts across dozens of accounts. This means social spam has given bad actors the potential for high impact with less effort.
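
Automated filtering of this kind does not have to be elaborate to add value. Below is a minimal moderation sketch that screens incoming wall posts and comments against keyword patterns and a domain blocklist; the patterns, blocklist entries, and actions are placeholder assumptions rather than a description of any particular product’s rules.

```python
# Minimal comment/wall-post moderation sketch. Patterns, blocklist, and actions
# are illustrative placeholders, not any vendor's actual filtering rules.
import re

SPAM_PATTERNS = [
    re.compile(r"work(?:ing)? from home", re.I),
    re.compile(r"lose weight fast", re.I),
    re.compile(r"free .{0,20}(gift card|points|codes)", re.I),
]
DOMAIN_BLOCKLIST = {"phish.example", "malware-drop.example"}

def moderate(text: str) -> str:
    """Return an action for one incoming post: 'remove', 'review', or 'allow'."""
    for domain in re.findall(r"https?://([^/\s]+)", text):
        if domain.lower().removeprefix("www.") in DOMAIN_BLOCKLIST:
            return "remove"   # known-bad link: take it down immediately
    if any(p.search(text) for p in SPAM_PATTERNS):
        return "review"       # likely spam: queue for a human moderator
    return "allow"

if __name__ == "__main__":
    print(moderate("Hack any Facebook account! http://phish.example/panel"))  # remove
    print(moderate("Make Money Working from Home, click here"))               # review
    print(moderate("Love this show!"))                                        # allow
```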

Bad actors in social media are motivated by a wide variety of factors, but the most common driver is financial gain. As the examples below illustrate, scammers use everything from links advertising the ability to “hack any Facebook account password…” to free product offers in order to get users to provide personal financial information.

Example: Facebook Account Hacking Scam

Consider the comment below on the official “One Tree Hill” Facebook page. The link claims the user can “hack any Facebook account password online for free…”

A user looking to hack another’s profile to obtain their password would begin by clicking on the link. For demonstration purposes, we will attempt to get Joe Pea-knut’s password.

Clicking on the link takes the user to the following page:

After typing a link to a Facebook account profile into the input field, the user is provided with the following sequence of images:

The profile image is shown in a “status bar” sequence, which is a tactic to convince the user that the password is actually being compromised.

After the sequence is completed, the user sees the following page:

The above message claims that the account has been successfully compromised, but that to retrieve the information, the user needs to create a “Members Panel Account.” To create a sense of legitimacy, the message cautions the user to keep his or her account details safe because “resetting Panel details or creating a new account on the same IP address is not possible due to security reasons.” After clicking “Generate Members Panel Account,” the user is given their supposed account details below.

Here, yet another message appears warning the user to keep his or her information safe. Clicking “Go to Members Panel” prompts the user to enter their credentials and takes them to the following page:

This page appears credible, complete with hacked profile image, name, status, and a “Download” button. However, the user is asked to share the link provided with five other people before the “Download” button can be unlocked. Thus, the unknowing user is effectively asked to distribute spam to perpetuate this scam.

If the “Download” button is clicked, a pop-up window is displayed.

This page claims that the file the user is about to download passes the virus scan of 16 of the most popular virus scanners. Once the “Download File” button has been clicked, the user is asked to take a brief survey of three yes/no questions. Once completed, the following page is shown:

This screen attempts to convince the user that a $100 American Express gift card is available if he or she inputs his or her mailing information. If that information is provided, the scammer has enough data to begin capturing the victim’s online identity. However, the scammer will often try to go beyond this basic level of information gathering. In this case, if the user does input his or her contact information, the site asks for his or her phone number, email address, and date of birth.

After providing this information, the user is asked to fill out a detailed auto insurance survey (below), which requires divulging more personal information such as the make and model of any vehicles owned, where they are parked, and if they have security systems. These questions suggest that the creators of the survey are attempting to make their scheme appear more realistic.

The auto insurance survey also asks for the user’s license status, education, GPA, and a variety of other personal questions. These could conceivably provide more than enough personal information for identity theft and fraud.

Notice the label next to the email input that reads, “No Spam.” Even this late in the scheme, the scammer is still actively trying to gain the victim’s trust. After submitting the information, the following page is displayed.

The only link that works on this page is the last one. This link brings the user to the official esurance website. The user information is not saved during this transfer, meaning that the information provided on the form is not transferred to esurance, and the user must re-enter his or her data on the website.

Account Hacks And Hijacks

Social Media Account Hijack: The act of infiltrating a branded social media account by an attacker for the purpose of stealing customer information, distributing malware, embarrassing the brand, or other malicious activities.

When a hijack incident occurs, the result for the social media team is panic and possibly significant recovery costs. Companies must take such steps as ramping up their PR efforts to regain control of their brand image, changing passwords, and de-provisioning users and applications. For a Fortune 100 firm, regaining control might mean getting hundreds or even thousands of authorized users back on track. Indeed, as Nexgate’s research indicates, a Fortune 100 company has an average of 320 accounts and 13 applications installed on those accounts – a significant amount of social infrastructure to recover. All of this is not to mention the significant loss in social media marketing investment: instead of promoting the brand message, valuable resources go towards promoting the words of the hacker.

An account hijack steals the brand’s voice to embarrass the brand, distribute malware, and even manipulate stock prices. Large social media audiences and the trust that consumers place in major brands make Fortune 100 firms and other well-known brands prime targets. Hackers have hijacked the social media accounts of some of the world’s biggest names, including President Obama, The Associated Press, Jeep, CBS, FIFA, Microsoft, and Burger King. And, as demonstrated by the chart of account hacks below, these are just a few of many high-profile hijacks.

Key Indicators of a Hack

While Facebook, YouTube, Twitter, and other social networks have tools in place to help prevent hijacking, hackers have learned to take advantage of poorly maintained passwords, authorized users, and compromised applications. While hacks vary in terms of scope, technique, and objective, we have identified general patterns to determine if a hijack may have occurred.

  • Burst in activity – When a hacker has taken over an account, they often publish a large volume of unwanted content in a short period.
  • Abnormal posting patterns – For example, a posting frequency that is too regular (e.g., every two seconds) indicates that a bot is making automated posts. Similarly, a dramatic change in the day and time of postings indicates a hijack: if posts are usually made on weekdays from 9 AM to 5 PM, a post at 2 AM on a Sunday morning suggests a hijack has occurred.
  • Link type change – A noticeable change in the type of links posted or the type of link shortener used is a hijack indicator (a simple automated check of all three indicators is sketched below).
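
These indicators lend themselves to straightforward automated checks. The Python sketch below flags posting bursts, posts outside a normal weekday schedule, and a sudden change in the dominant link domain; the thresholds, window sizes, and the assumed 9-to-5 weekday baseline are illustrative assumptions, not established detection rules.

```python
# Simple checks for the three hijack indicators listed above. Thresholds and
# windows are illustrative assumptions, not established detection rules.
from collections import Counter
from datetime import datetime, timedelta
from urllib.parse import urlparse

def burst_detected(post_times, window=timedelta(minutes=10), max_posts=10):
    """Indicator 1: unusually many posts inside a short time window."""
    times = sorted(post_times)
    for i, start in enumerate(times):
        if sum(1 for t in times[i:] if t - start <= window) > max_posts:
            return True
    return False

def off_hours_post(post_time, usual_days=range(0, 5), usual_hours=range(9, 17)):
    """Indicator 2: a post outside the account's normal posting schedule
    (assumed here to be weekdays, 9 AM to 5 PM)."""
    return post_time.weekday() not in usual_days or post_time.hour not in usual_hours

def shortener_changed(recent_urls, historical_urls):
    """Indicator 3: the dominant link domain / shortener suddenly differs."""
    def top_domain(urls):
        domains = Counter(urlparse(u).netloc.lower() for u in urls)
        return domains.most_common(1)[0][0] if domains else None
    return top_domain(recent_urls) not in (None, top_domain(historical_urls))

if __name__ == "__main__":
    now = datetime(2014, 6, 1, 2, 0)                       # 2 AM on a Sunday
    print(off_hours_post(now))                             # True, as in the example above
    posts = [now + timedelta(seconds=30 * i) for i in range(12)]
    print(burst_detected(posts))                           # True: 12 posts within 10 minutes
```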

Please reference Nexgate’s How-To Guide to Stopping Account Hacks for more information on how to mitigate risk of a social media account hijack.

A New Communication Infrastructure Requires a New Approach

As outlined in the first installment of this series, widespread adoption of enterprise social media has created a new social communication infrastructure that companies must now manage and protect, just as they do their email and web communication infrastructures. This infrastructure has become just as critical as web and email, but it bears important distinctions.

The first distinction is that the inherently open nature of social media cannot be managed and protected using the tools provided by the social media platforms alone. These platforms were designed for consumers, not for business use. The security risks originating from social media are not merely a function of the individual platforms, which do an excellent job of mitigating security risk for their users to the best of their ability; in fact, very few issues arise from security flaws in the platforms themselves.

Additionally, unlike web and email, social media infrastructure exists within each social network platform in the cloud, and is therefore completely outside of the traditional security perimeter. Thus, it lacks the controls associated with a typical infrastructure. Malicious users have recognized how wide open company social infrastructure is, and have begun flocking to social media to take advantage of companies and their audiences.

Moreover, as opposed to the web and email, social media grew out of marketing rather than IT, so social media security expertise is limited. Given that marketers are not security experts and security experts are not fluent in social media, a different approach to managing security threats is needed – one that bridges these two disciplines and addresses social media’s openness and location beyond the corporate perimeter.

We argue that this new kind of infrastructure has ushered in the need for a new kind of security. Social media’s inherently open nature, its location beyond the IT perimeter, and a history that requires bridging traditional corporate roles necessitate a different approach than those used previously. With the right strategy, even the largest corporations with the most expansive social infrastructure can minimize risk. Below, we provide the steps necessary to do so while maximizing ROI on social media communications.

Recommendations

Managing external threats to enterprise social infrastructure begins with understanding a brand’s social footprint. After all, there is no way to monitor and secure accounts without knowledge of their existence. Nowhere is this more important than at Fortune 100 companies, where an average of 320 branded social media accounts exist per company.

1. Map Social Footprint

The sea of accounts that populate social media networks makes discovering a brand’s social media footprint essentially unmanageable when done manually. The countless hours and resources that would go into manually searching for accounts make it cost-prohibitive and impractical, not to mention inherently prone to human error. Automated technology provides an efficient and cost-effective way to accurately determine a brand’s social media footprint.

Being aware of their social footprint allows organizations to create an inventory of accounts – both existing and newly created – and ensure only authorized access to those accounts. This limits the number of direct users to a particular account, thereby decreasing the number of possible targets for spear phishing attacks and account hijacks.

2. Identify Unauthorized Accounts

Additionally, if brands understand their social footprint, they can more easily identify unauthorized accounts. As with discovering the social media account footprint, the volume of social media accounts to analyze makes manual discovery of unauthorized accounts impractical. Automated tools provide the best solution for finding these accounts, which can then be removed using each social platform’s reporting processes. For more information on how best to deal with unauthorized accounts, please see Nexgate’s How-To Guide to Discovering and Reporting Unauthorized Accounts.
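
Recommendations 1 and 2 can be combined into a simple reconciliation step: compare the accounts surfaced during footprint discovery against the inventory of accounts the brand actually authorizes. The sketch below assumes discovery has already produced a list of candidate handles; the handles and data structures are placeholders for illustration.

```python
# Reconciling discovered accounts against an authorized inventory (recommendations 1 and 2).
# The discovery input and the authorized handles are placeholders for illustration.

AUTHORIZED = {                      # inventory maintained by the social media team
    "ExampleBrand",
    "ExampleBrandSupport",
    "ExampleBrandCareers",
}

def reconcile(discovered_handles):
    """Split discovered accounts into to-be-reviewed and missing-from-discovery sets."""
    discovered = set(discovered_handles)
    unauthorized = discovered - AUTHORIZED
    missing = AUTHORIZED - discovered      # authorized accounts discovery failed to find
    return unauthorized, missing

if __name__ == "__main__":
    found = ["ExampleBrand", "ExampleBrand Deals", "ExampleBrandSupport", "ExampleBrand Free Points"]
    unauthorized, missing = reconcile(found)
    print("report to platform / review:", sorted(unauthorized))
    print("authorized but not found:", sorted(missing))
```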

3. Monitor Accounts for Malicious and Inappropriate Content and Respond Appropriately

Automated technology can persistently monitor accounts for any suspicious changes, including the appearance of unwanted, malicious, and inappropriate content. While the high volume of ever-changing content makes manual content moderation inefficient, expensive, and simply not scalable, automated technology can be used to comb through even the most active of accounts for any sign of account tampering, hijacks, or abuse.

In the event that a security breach does occur, brands should take swift, immediate action to remove the unwanted, inappropriate, or malicious content from their pages and minimize damage. Again, the best way to achieve this is through automated technology, which will act immediately after the incident has occurred to minimize harm and begin regaining control of the account. For more information on how to prevent and respond to account hacks, please see Nexgate’s How-To Guide to Stopping Social Media Account Hacks.

4. Establish Organizational Roles and Responsibilities

Successful social media threat response plans start at the corporate structural level. Establish definitive organizational roles and responsibilities for identifying and responding to social media threats. Because social media stretches across traditional corporate divisions, it requires coordination that can only be achieved through clear definition of roles and responsibilities. Please reference Nexgate’s Mapping Organizational Roles and Responsibilities for Social Media to see the full framework for assigning roles and responsibilities within corporate structures to manage social risk.

5. Develop a Social Media Acceptable Content Use Policy (AUP)

Creating an AUP clearly defines acceptable and unacceptable content in corporate social media settings. An AUP communicates social content policy to both an organization’s internal and external community, allowing for the quick removal of undesirable content such as malware links, hate speech, bullying, or any other policy violation. This gives brands the power to enforce community rules, create a safer, more respectful environment and promote positive engagement.
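
An AUP is easiest to enforce consistently when it is captured in a machine-readable form that moderation tooling can act on. One possible (hypothetical) encoding is sketched below, mapping the content categories discussed in this report to response actions; the format and actions are illustrative, not a prescribed standard.

```python
# One possible machine-readable form of a social media AUP. The categories mirror
# the examples in this report; the format and actions themselves are hypothetical.
ACCEPTABLE_USE_POLICY = {
    "malware_link":  {"action": "remove", "notify_security_team": True},
    "phishing_link": {"action": "remove", "notify_security_team": True},
    "hate_speech":   {"action": "remove", "notify_community_manager": True},
    "bullying":      {"action": "remove", "notify_community_manager": True},
    "spam":          {"action": "remove", "notify_community_manager": False},
    "profanity":     {"action": "review", "notify_community_manager": True},
}

def action_for(category: str) -> str:
    """Look up the response an incoming policy violation should trigger."""
    return ACCEPTABLE_USE_POLICY.get(category, {}).get("action", "review")
```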

Conclusion

The results of Nexgate’s analysis of the Fortune 100’s social media infrastructure demonstrate the unique threats impacting enterprise social media accounts. Unauthorized accounts, content-based threats, and account hijacking are all risks that must be addressed. Given the scale and complexity of the infrastructure, manual review of all social media content is impractical; automated discovery, monitoring, and remediation technology can more effectively find unauthorized accounts, remove malicious content, and detect account hacks. The external threats covered in this report represent only one segment of the factors that make up social risk; internal compliance risks are also important to recognize and address. The final installment of our study of the Fortune 100’s social media focuses on those threats that originate from within organizations themselves.
