
Worst Network Practices (Part 2, Friday the 13th Edition)

By Andrew Lerner | May 13, 2016 | 2 Comments

Networking, Culture

What better time to talk about worst network practices than Friday the 13th? Back in December, we published research (and a blog) on the ten Worst Network Practices, which include:

  1. Risk Aversion Stifles Innovation (aka because that’s the way we’ve always done it) – Note this is the polar opposite of “Shiny New Object Syndrome”
  2. Manual Network Changes (But the network is special)
  3. Limited Collaboration (Let’s keep this under the radar)
  4. Technical Debt (Incrementalism)
  5. Outdated WAN architecture (MPLS or bust)
  6. Limited Network Visibility (We’re still trying to figure it out)
  7. Failure to Survey the WLAN (3 Bars is good enough)
  8. Taking Questionable Advice (But the VAR is my partner)
  9. Vendor Lock-in (But this provides enhanced capability)
  10. WAN Waste (But the Carrier is my partner)

For each “worst practice”, we provide a definition and real-world examples, identify their impact, and provide specific guidance to avoid them. Here’s an example (a snippet from the published research):

Manual Network Changes, AKA: …but the network is special.

We observe limited network automation and change control maturity within many enterprise network teams. This is confirmed in research surveys that indicate only 11% of organizations are fully leveraging network automation. Configuration and change management of networking gear remains primarily a labor-intensive, manual process that involves remote access (for example, via Telnet or Secure Shell [SSH]) to individual network devices and typing commands into vendor-specific command-line interfaces, or the use of homegrown scripts. These processes are ripe for human error and, not surprisingly, we find that human error is a leading cause of network outages within enterprises. Indicators that this is an area requiring attention include:

  • A majority of network changes are CLI-driven
  • Lack of or poorly implemented network automation tools
  • Outages occurring due to misconfigured network devices
  • Inability to quickly recover or roll back from network outages

Action: Three key steps will help reduce outages and operational expense associated with manual changes. They include:

  1. Establishing standard network device configuration policies to reduce complexity and enable a greater degree of change automation. This will require network teams to participate in companywide change management processes and will require integration with configuration management tools used by other technologies, such as servers and storage.
  2. Investing in network automation (NA) tools to monitor and control device configurations (including rollback), perform post-change service validation, and enable the enforcement of compliance policies.
  3. Encouraging network teams to hunt for manual processes that can be automated, and rewarding them when they convert those processes.
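To make the first two steps concrete, here is a minimal Python sketch of a policy compliance check: it compares a device's running configuration against a set of required standard lines. The policy lines, sample config, and syslog host are hypothetical illustrations, not from the research; real NA/NCCM tools do far more (rollback, post-change validation, scheduling).

```python
# Hypothetical standard-configuration policy: lines every device must have.
REQUIRED_LINES = {
    "service password-encryption",
    "no ip http server",
    "logging host 10.0.0.50",  # hypothetical syslog collector
}

def compliance_gaps(running_config: str) -> set:
    """Return the required policy lines missing from a running config."""
    present = {line.strip() for line in running_config.splitlines()}
    return REQUIRED_LINES - present

# Hypothetical running config pulled from a device.
sample_config = """\
hostname branch-sw-01
service password-encryption
no ip http server
"""

missing = compliance_gaps(sample_config)
print(sorted(missing))  # lines this device still needs
```

A check like this, run on every device after each change window, turns "did someone fat-finger the config?" from a manual audit into an automated report.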

Regards, Andrew

PS – Network vendors have their own set of worst practices, including vendorspeak and saying things like “our difference is the architecture.”



2 Comments

  • Nitin Sharma says:

    While I totally agree that standardization and automated monitoring tools will have a positive impact, I have not been able to list out the manual processes that can be automated. Is it that I’ve reached the threshold of automation or is it the threshold of my knowledge?

    • Andrew Lerner says:

      Creating VLANs, Adding Ports, Turning up new branch infrastructure (routers/switches) are common starting points. Incenting staff to search for things to automate is another way to “find” things.
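      As a rough illustration of that first starting point, the VLAN-creation task can be reduced to a small, testable command generator; pushing the result to a device would then use a library such as Netmiko. The command syntax is Cisco-IOS-style and the device details are hypothetical.

      ```python
      def vlan_commands(vlan_id: int, name: str) -> list:
          """Generate the config lines to create and name a VLAN."""
          if not 1 <= vlan_id <= 4094:
              raise ValueError("invalid VLAN id: %d" % vlan_id)
          return ["vlan %d" % vlan_id, "name %s" % name]

      cmds = vlan_commands(110, "guest-wifi")
      print(cmds)

      # Pushing the change requires the third-party netmiko package and a
      # reachable device, so it is sketched here but not run:
      #
      # from netmiko import ConnectHandler
      # device = ConnectHandler(device_type="cisco_ios", host="10.0.0.1",
      #                         username="admin", password="...")
      # device.send_config_set(cmds)
      ```

      Separating command generation from delivery is what makes the process auditable and repeatable, which is the point of replacing ad hoc CLI typing.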