Fear, Uncertainty, Doubt and TikTok

By Andrew Walls | January 25, 2023


Invoking security when security is not the problem

If you are in or near the US, you have most likely heard the latest brouhaha concerning security threats presented by a social media app. Today’s target is TikTok, an app and platform for creating, sharing and commenting upon short videos. TikTok is a subsidiary of ByteDance, an organization based in the People’s Republic of China. Both companies are incorporated in the Cayman Islands.

In November of 2022, FBI Director Christopher Wray publicly shared FBI concerns that TikTok constitutes a threat to national security. In response, multiple federal and state governments banned the app on devices they own or manage, and pressure has been brought to bear on other organizations to follow suit.

The problem with all of this is that there is no evidence to substantiate claims that TikTok is dangerous. Fundamentally, TikTok is considered dangerous simply because it comes from China. The app collects and manages data in ways common to most social media apps and uses proprietary algorithms to recommend videos to users. Many other social media platforms do similar things and many of these platforms have violated regulations, shared data inappropriately and failed to protect their users. The crucial difference is that most of these other platforms are owned by US corporations. The concern that a foreign government is spying on social media conversations is nothing new to people outside of the US. The US federal government, via legislation such as the Patriot Act and more covert arrangements, has monitored internet traffic for many years. Users based outside of the US assume that social discourse on US-based social media platforms is subject to US surveillance in some form.

I am not arguing here that TikTok is or is not a safe platform. We simply do not know. Although the relative security of TikTok is amenable to analysis (see this study out of the Georgia Institute of Technology for one of the few solid security analyses of TikTok), the fervor with which governments are banning TikTok has nothing to do with theoretical or demonstrable security flaws within the app, the platform or the ByteDance/Douyin/TikTok corporate structure.

This is about politics, propaganda, marketing and public image management.

Cybersecurity leaders are being asked by senior leaders whether their enterprise should ban TikTok, but this is not a security issue or decision!

In the absence of solid evidence to the contrary, TikTok should be regarded as being as safe, or as dangerous, as every other social media app. No one is in a position to accurately assess the security risks of using online information distribution and social media platforms. You cannot read online news or communicate with friends via the internet without being tracked in some way by vendors and governments, which means the concerns expressed about TikTok apply equally to all social media platforms, including those based in the US or ‘friendly’ countries.

Security leaders need to help their senior management define the non-security factors that should drive decisions about TikTok (and the next target of national FUD). For example:

  • Will we gain benefit from a ban by being seen to support government (local and federal) moves to ban?
  • Will we suffer harm from not banning TikTok (e.g., lost government contracts, removal of public funding, public attack in the media)?
  • Will a ban alienate employees, clients or trading partners who are from or based in China?
  • Will participation in security theater dilute the credibility of our own security program and recommendations?
  • Are we using TikTok in our operations and need to find and fund a replacement platform and migration process? 
  • Should we ban other services and products sourced from China [substitute here the name of whichever country(ies) is currently positioned locally as an adversary]?

These are all important factors in making decisions about supporting an app, but none of these criteria are about the security of that app. When FUD is promulgated by people in authority, security leaders need to give senior management clarity on the real and substantive issues driving the latest round of security theater. Ultimately, invoking security theater to obfuscate or sugarcoat political or marketing activities damages the credibility of the security industry – and your security team – and degrades employee support for security principles across the enterprise.

The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.

  • Thierry says:

    TikTok has become increasingly popular among kids and teenagers, but it’s important to be aware of the potential dangers that come with using the platform. For one, the app’s algorithm can expose children to inappropriate content. Additionally, TikTok has a feature that allows anyone to message users, which can put children at risk of online predators and cyberbullying. TikTok has also been criticized for its handling of user data, which raises concerns about privacy and security. Parents should be aware of these risks and take steps to monitor their children’s use of the app, such as setting up parental controls, discussing online safety with their kids or even blocking access.

  • Komal says:

    Informative content