Welcome to the 2020s: What Does It Mean for Your Data Center Network?

We’ve just kicked off a new decade, and while it might not be as spectacular as a new millennium with all of the Y2K suspense of 20 years ago, a decade is still a significant span in which technology can reshape how we live, work and play. In particular, data center networks now face a major transition in the edge-to-cloud world we live in.

Data center evolution
Look back to the dotcom bubble and the evolution of the modern enterprise data center. We saw significant growth in data centers during the 1990s, driven by the need to power workloads across the World Wide Web – ahem – I mean the Internet.

Following the mass proliferation of data centers in the 1990s and 2000s, enterprises began their quest to consolidate those data centers and continue to do so today. In parallel, we saw the launch of AWS and the public cloud, which brought the hyperscale data center and another option for hosting enterprise workloads.

The 2020s will be all about the edge
As we enter the 2020s, it’s becoming apparent that another data center evolution is under way.

After two decades marked by the centralization of compute and infrastructure, the pendulum is swinging back toward the edge. Digital transformation and the need to harness data from connected devices to create real-time, connected experiences at the edge are driving this paradigm shift.

According to Gartner, today 90% of data is created and processed inside centralized data centers or the cloud. But by 2025, about 75% of data will need to be processed, analyzed, and acted upon at the edge. With this swing, you should expect some changes to your data center.

First, expect traditional data centers to continue to shrink, due to higher density from hyperconvergence and also because workloads continue to move to the cloud.

Second, as more workloads are placed at “the centers of data” to optimize performance and costs, expect the emergence of “edge” data centers. Enterprise-owned data centers will likely consist of two types:

  1. A blending of traditional data centers and campus environments.
  2. Mini data centers with IoT-enabled environments, such as large manufacturing centers.

Last, as we continue to see more DevOps and agile practices from application teams, we will see more pressure on network teams to optimize around workload-driven operations.

Data center evolution in the edge-to-cloud era

Three networking requirements for the edge-to-cloud era
Change is imminent for the network, too, as you seek to balance the new requirements of edge data centers with growing use of cloud and your remaining on-prem footprint.

While you are likely already on your journey toward this new era of data center networking, here are three top considerations to keep in mind. In fact, these three requirements are applicable for any type of data center including private cloud, co-location, and edge.

1. Simplification through automation
Application teams continue to adopt DevOps and other agile methodologies to accelerate software development. To better support these teams and the business, expect networking operations to become far more automated and simpler than they are today.

What you’ll need are solutions that align with current and future operating models and existing investments. Look for turnkey automation to simplify common, yet time-consuming configuration tasks. For teams with more mature DevOps practices, extending common automation platforms like Ansible to network-related workflows will be a must. Finally, as we continue to see more DevOps and agile practices in application teams, expect those practices to influence how other teams within IT operate.

2. Actionable insights via analytics
There is perhaps no bigger resource drain on network operations than trying to troubleshoot issues. Having better network visibility is imperative to shortening MTTR, improving IT service delivery, and keeping short-staffed teams focused on more strategic matters.

Gaining network-wide telemetry, captured and processed natively on each node, will be a huge leap forward. These analytics, paired with built-in remediation, will be instrumental in providing better network assurance and helping troubleshooters proactively identify, or even preempt, user- or business-impacting issues.

Predictive analytics can also anticipate issues before they arise and aid capacity planning, especially during periods of high usage, ensuring the network is right-sized to deliver on user experience demands.
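To make this concrete, here is a minimal sketch of the kind of trend-based capacity forecast such analytics might perform. The least-squares fit and the utilization figures are illustrative assumptions, not a description of any vendor’s implementation.

```python
# Illustrative sketch (not an actual product feature): projecting when a
# link's utilization will cross a capacity threshold using a simple
# least-squares trend over historical samples.

def fit_trend(samples):
    """Return (slope, intercept) of a least-squares line over the samples."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def periods_until(samples, threshold):
    """Estimate future periods until utilization crosses the threshold."""
    slope, intercept = fit_trend(samples)
    if slope <= 0:
        return None  # flat or declining trend: no projected exhaustion
    last = len(samples) - 1
    current = slope * last + intercept
    return max(0.0, (threshold - current) / slope)

# Hypothetical monthly peak utilization (%) of an uplink, trending upward.
history = [52, 55, 59, 61, 66, 70]
print(round(periods_until(history, 90), 1))  # prints 5.8 (months to ~90%)
```

A real analytics engine would weigh seasonality and confidence intervals, but even this crude projection shows how telemetry history can turn into an actionable “upgrade this link within N months” signal.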

3. Always-on availability
Five 9s of availability is by no means a new data center requirement. However, you can argue that the need for highly resilient networks will only intensify in a digital era where even a minor hiccup has huge ramifications for the business.

Automating day-to-day operations will help improve uptime by avoiding human error. But what networking teams also need is a far simpler, more reliable way of ensuring high availability, while also delivering on the promise of non-disruptive upgrades. Having a cloud-native, microservices-based operating system will ensure added resiliency at the software level, and being able to orchestrate live software upgrades to eliminate maintenance windows will also be critical.

HPE + Aruba = Data Center Nirvana
As in previous decades, we’re not seeing a decline of the traditional data center, but rather an evolution; in this case, the emergence of edge data centers to power IoT and other digital initiatives at distributed business locations.

As your organization begins this latest transition, you won’t be alone. HPE and Aruba are leading the way in providing technology solutions for the data center with our CX portfolio.

Aruba CX Networking for the Evolving Data Center

Explore our data center networking solutions to see how we can help transform your data center once more.


Cybersecurity company makes all of its courses available for free

In total, there are 24 certification courses, covering everything from basic knowledge to advanced levels in information security

Fortinet, a cybersecurity company, has made all of its training courses, previously open only to partners, freely available to anyone who wants to expand their knowledge in the field. In a climate that leaves users more susceptible to cyber threats, the goal is to reduce the exposure of organizations and individuals.

In total, there are 24 courses covering everything from basic to advanced levels of cybersecurity. The modules, most of which come from the official curriculum of the Network Security Expert (NSE) Institute, a training program that validates technical professionals in network security, will be free until the end of 2020.

“The moment has forced many organizations to face rapid change and new risks as they adopt remote work models. IT teams are under pressure to effectively protect their companies in highly dynamic environments that demand extensive security skills. As both a technology and a training company, we have made our entire catalog of advanced courses available online for free, at a self-guided pace, so that anyone can expand their knowledge and skills,” explained John Maddison, executive vice president of products and chief marketing officer at Fortinet.


The 24 courses offered by the company can be taken on each person’s own schedule. Image: Pixabay

Since it began offering the basic NSE 1 and 2 modules and the advanced FortiGate Essentials course for free in early April, the company has been receiving roughly one new registration every 30 seconds worldwide. More than 48,000 people have already enrolled in the basic modules and more than 14,000 in FortiGate Essentials.

It is worth noting that the courses can be taken according to each person’s availability, and the only prerequisite is fluency in English. The videos already available will be complemented by regularly scheduled live sessions with Fortinet-certified instructors. At the end of each module, participants will receive certificates.

For those interested, the free courses can be accessed via the Fortinet website.


Back to square one: The Capital One breach proved we must rethink cloud security

By all accounts, Capital One defended its customers’ data with the imposing array of cyber security tools that you’d expect from one of the largest banks in the United States. And yet a lone hacker managed to bypass those tools and obtain the sensitive personal information of more than one hundred million people, a breach that will likely cost the bank well over a hundred million dollars when all is said and done.

The hacker — a former employee of Amazon Web Services, which hosted the compromised database — gained access to the sensitive data by exploiting a misconfiguration in one of Capital One’s application firewalls. Such misconfigurations along the customer’s interface with the cloud have become a favorite target for cyber-criminals. In fact, according to Gartner, 99% of cloud security failures will be the customer’s responsibility through 2023.

The fundamental flaw

At a time when virtually all enterprises have adopted cloud infrastructures that expand and evolve as needed, configuring firewalls and other endpoint protections to remain properly positioned can be a daunting challenge. These conventional security tools are designed to defend the digital perimeter — an antiquated strategy given today’s borderless networks. Moreover, modern developers now have the ability to spin up a cloud instance in minutes, often without having to consult their firm’s security team. As a consequence, the overwhelming majority of organizations lack visibility over their own cloud environments.

While nearly half of organizations don’t even bother looking for malware on the cloud, Capital One had a relatively mature cloud security posture — at least by traditional standards. It is therefore all the more alarming that the bank didn’t become aware of the breach until more than three months after the fact, when it received a tip from an outsider who’d stumbled upon the stolen data. That a major financial institution was blind to this level of compromise further demonstrates the urgency of rethinking cloud security.

Of course, there is no silver bullet when it comes to cyber defense — and that goes double for the cloud. Motivated attackers will inevitably find a way inside the nebulous perimeters of IaaS and SaaS environments, whether via insider knowledge, critical misconfigurations, personalized phishing emails, or mechanisms that have yet to be seen. The path forward, then, is to use artificial intelligence to understand how users behave within those perimeter walls, an understanding that shines a light on the subtle behavioral shifts indicative of a threat.

Demystifying the cloud

The latest cyber AI security tools aim to do just that: observing traffic activity on AWS and other CSPs to learn an evolving sense of ‘self’ for each unique cloud environment they protect. Indeed, this ability to distinguish between normal and abnormal behavior proved decisive when a financial services company faced an attack strikingly similar to the Capital One breach. The firm was hosting a number of critical servers on virtual machines — some of which were meant to be public-facing, some of which were not. When configuring its native cloud controls, however, the firm mistakenly left one of its private servers exposed to the internet, rather than isolated behind a firewall.

The exposed server was eventually discovered and targeted by cyber-criminals who were scanning the internet via Shodan, a search engine that locates internet-connected assets. Within seconds, Darktrace’s AI detected that the device was receiving an unusual amount of incoming connection attempts from a wide range of rare external sources and alerted the security team — which had been unaware of the misconfiguration. This “unusual” volume of “rare” connections might well have been normal for a different company or a different server, but the AI’s knowledge of ‘self’ revealed the activity to be anomalous in this exact case.
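As a simplified illustration of this idea (and emphatically not Darktrace’s actual model), the sketch below flags a time window in which the number of distinct external sources connecting to a server far exceeds that server’s learned baseline. All counts are hypothetical.

```python
# Toy "sense of self" anomaly check: a server that normally sees a handful
# of distinct external sources per window is flagged when a window's count
# deviates far above its own baseline. Illustrative only.
from statistics import mean, stdev

def is_anomalous(baseline_counts, current_count, k=3.0):
    """Flag current_count if it exceeds the baseline mean by k std devs."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts) or 1.0  # guard against a zero-variance baseline
    return current_count > mu + k * sigma

# Hourly counts of distinct external sources for a normally quiet server.
baseline = [3, 4, 2, 5, 3, 4, 3]
print(is_anomalous(baseline, 4))    # prints False: within normal variation
print(is_anomalous(baseline, 250))  # prints True: an internet-wide scan hits it
```

The key point the example captures is relativity: 250 connection sources might be routine for a public web server, but against this server’s own baseline it is unmistakably anomalous.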

By employing such AI systems, we can gain the necessary knowledge of complex cloud environments to catch threats in their nascent stages — before they escalate into crises. Ultimately, the cloud promises to unlock new heights of efficiency and novel forms of collaboration, so long as we’re willing to adopt equally innovative security tools. Because while there may never be a silver bullet for safeguarding cloud services, AI does offer hope for a silver lining.


1,000+ Customers and Counting with the Aruba CX Switching Portfolio

What do 1,000+ companies across a number of industries from all over the world have in common? Well, for one, they’ve all deployed Aruba CX Switching to power their network cores.

For a little perspective, we launched the AOS-CX operating system and the Aruba CX 8400 in June 2017. In the two years since, we’ve been helping more and more companies modernize their networks with a next-gen switching platform.

The value of the Aruba CX is evident across the myriad industries where our switching platforms, software, and management tools are leveraged. Let me share a few examples of why customers are deploying this next-gen switching platform.

Full Programmability to Automate and Simplify Management
One reason so many companies embrace the Aruba CX portfolio is because it greatly enhances IT agility and efficiency. Driving these benefits is AOS-CX, the most modern network operating system in the industry.

We built AOS-CX from the ground up and it is based on cloud-native principles. It offers advanced levels of programmability via full RESTful API coverage. This allows IT staff to program their network to seamlessly communicate with other network services, devices, and apps—streamlining workflows and automating many common tasks to enhance the network operator experience and greatly simplify management.
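As an illustration of what API-driven automation looks like in practice, the sketch below composes REST URLs and fetches a resource as JSON. The API version string and resource path are assumptions for illustration; authentication and TLS handling are omitted, so consult the AOS-CX REST API reference for the real endpoints.

```python
# Sketch of driving a switch's RESTful API from a script, in the spirit of
# AOS-CX's full REST coverage. Endpoint paths and the version segment are
# hypothetical; auth and certificate verification are omitted for brevity.
import json
import urllib.request

API_VERSION = "v10.04"  # assumption: varies by software release

def build_url(switch, resource):
    """Compose a REST URL for a resource on the given switch."""
    return f"https://{switch}/rest/{API_VERSION}/{resource.lstrip('/')}"

def get_resource(switch, resource):
    """Fetch a resource and decode its JSON body (performs a network call)."""
    with urllib.request.urlopen(build_url(switch, resource)) as resp:
        return json.load(resp)

print(build_url("10.0.0.1", "/system/interfaces"))
# prints https://10.0.0.1/rest/v10.04/system/interfaces
```

Wrapping calls like these in scripts or an automation platform is what turns device-by-device CLI work into repeatable, network-wide workflows.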

Most companies find they can better allocate scarce IT staff after deploying Aruba CX switches by automating what were once resource-intensive processes. For example, Mid-South Energy uses the REST APIs in AOS-CX to “infuse [their] network with self-healing properties” and thus obtain “smoother IT operations.”

Watch this short video to see the innards of Aruba CX and the software innovations that have led to broad adoption of our next-gen switching portfolio.

Error-Free Deployments with NetEdit
Network teams are often overtaxed by frequent adds, changes, and moves required to support today’s digital workplace. Such changes are highly manual, requiring lines of CLI commands implemented on a device-by-device basis.

Aruba alleviates the complexity with NetEdit, a configuration orchestration tool for Aruba CX switches. NetEdit arms IT teams with the power to smoothly coordinate end-to-end service roll outs, automate rapid network-wide changes, and ensure policy conformance after network updates. This intelligent assistance and continuous validation assure that network-wide configuration changes are consistent, compliant and error-free.

For one Aruba customer, a grocery chain in Europe, this level of automation drives significant improvements in operational efficiency. This customer had a café printer that needed a specific port configuration, but pushing out an update required searching each store, locating the printer, and making a hard change on each switch. With Aruba, pushing out an update is done seamlessly from a central management console, across all devices.

Resilient by Design for Unrivaled Availability
Today, network downtime equates to lost productivity and revenue. Even a millisecond of poor network performance can cost a high-frequency trader millions of dollars.

With AOS-CX, such hiccups aren’t a threat, as its resilient design ensures networks are always on. This is made possible through a robust, yet simple solution for high availability, known as Aruba Virtual Switching Extension (VSX). Powered by AOS-CX, VSX also enables live upgrades at the aggregation and core layers, so business is never disrupted, even during necessary maintenance windows.

Built-In Analytics Accelerate Troubleshooting and Root Cause Analysis
If performance issues do arise, operators need actionable insights to quickly pinpoint and address the root cause. Unfortunately, traditional methods of identifying problems—such as using probes and show commands—are too reactive and slow. Moreover, the use of third-party monitoring tools creates additional gaps in visibility, as they often sample data and offer little to no correlation to root cause.

To address these visibility issues, the Aruba Network Analytics Engine (NAE) is built into every CX switch, capturing important data to help operators optimize network performance. NAE automatically interrogates and analyzes any network event that can impact performance or security. By capturing telemetry natively on the switch, NAE provides real-time, network-wide insights so operators can quickly detect, prioritize, and fix problems. This helps reduce mean time to resolution (MTTR), minimizing business disruptions as well as operational costs tied to troubleshooting issues.

Network operators can also use analytics from NAE to predict or even preempt problems. For example, Friesland College uses “the analytics and trending information available from [NAE to] make adjustments before a service experiences latency or capacity issues due to growth.”

We’ve applied our customer-first, customer-last mentality to the Aruba CX switching portfolio by fostering community-driven, open-source development. AOS-CX allows IT staff to build their own innovations into the platform on demand. Python scripts can be shared and consumed around the world via the official Aruba Solutions Exchange as well as GitHub. This allows IT staff from disparate companies to work together on common issues, then pick and choose which innovations to deploy in their own environments.

It’s Time to Make the Switch
The Aruba CX switching portfolio is delivering immense value to businesses worldwide. However, we’re not stopping here. Over the past several months, we’ve been busy building on these innovations to bring more automation, intelligence, and performance to your network.

Join us during our live launch event on Oct. 22 as we unveil new innovations. See how you can displace your legacy network with an architecture designed to propel you and your business forward into the future of IoT, cloud, and mobile.


Break Free from Legacy Network Constraints with Aruba CX Switching

Businesses can’t move forward with digital transformation using networks that are stuck in the past. Characterized by manual processes, fragmented operations, and a lack of visibility and control, these legacy networks present IT with a number of obstacles when trying to deliver on the expectations of modern users—be they customers, employees, partners, or citizens.

As organizations expand their adoption of cloud, mobile, and IoT, these network constraints will only become more pronounced. Legacy switching infrastructure, in particular, is overtaxing network operators, as they must grapple with disparate operating systems and even entirely different operating models at each layer of the network.

Aruba has broken these constraints. Today, we announced significant innovations to our Aruba CX Switching Portfolio that will equip network operators with the industry’s first, end-to-end platform that spans campus, branch, and data center networks.

Included in this release are revolutionary hardware platforms, new software innovations, and enhanced analytics and automation capabilities, all purpose-built for the network operator tasked with supporting today’s frenzied business environment.

Let’s take a closer look at each of these enhancements.

Unrivaled Scale and Flexibility with Aruba CX 6400 and CX 6300 Switch Series

Cloud, video, and collaboration apps were already pushing legacy switches to their limit. Now, the advent of IoT is flooding networks with even more traffic. That’s why we’ve introduced two new switching families to support these new demands, with plenty of capacity to accommodate tomorrow’s technologies.

The Aruba CX 6400 Switch Series is a family of modular switches that come in 5-slot and 10-slot chassis, with a non-blocking, distributed architecture capable of delivering 2.8 Tbps per slot. Scaling from 1G PoE access to 100G core, the 6400 switches are a true Swiss Army Knife for today’s network operators, supporting any use case or workload across the enterprise, from campus access to data center environments.

The Aruba CX 6300 Switch Series is a family of stackable switches ideal for network access, aggregation and core use cases. With support for up to 10-member VSF stacking and offering built-in wire speed 1/10/25/50 gigabit uplinks, the 6300 switches deliver unrivaled investment protection with the flexibility to support significant growth around emerging technologies such as IoT and Wi-Fi 6.

Delivering 140+ Rich Access-layer Features with AOS-CX 10.4

Organizations embarking on or broadening their IoT initiatives will get significant mileage out of AOS-CX 10.4. This fifth major release of our cloud-native operating system brings core-proven reliability to the access layer with 140+ rich software features.

Among these is new support for always-on PoE, as well as VxLAN with EVPN in both campus and data center networks—important capabilities for IoT deployments. Always-on PoE ensures Wi-Fi access points and critical IoT devices, such as healthcare sensors, will never lose power, even during network upgrades. Meanwhile, EVPN over VxLAN delivers a simple, yet highly scalable way to segment the ever-increasing diversification of IoT-enabled workloads and devices.

AOS-CX 10.4 also extends Aruba Dynamic Segmentation to campus access, further simplifying an operator’s task of providing unified policy and secure connectivity across wired and wireless networks for every user and IoT device.

Distributed Intelligence and Automation with NetEdit 2.0 and Network Analytics Engine

One of the guiding principles of the Aruba CX Switching Portfolio is to simplify and enhance the network operator experience. That’s why we’re excited to introduce significant enhancements to Aruba NetEdit, our intelligent configuration tool that automates many aspects of deploying and managing CX switches.

Central to these enhancements is the integration of NetEdit and the Aruba Network Analytics Engine (NAE), an on-box application enabled by AOS-CX that captures rich analytics on every CX switch to automate many aspects of network monitoring and troubleshooting.

The integration between NetEdit and NAE reduces the burden on network operators when investigating user- and network-impacting issues. And now that CX has been extended to the access layer of campus networks, network operators can benefit from distributed analytics and real-time, network-wide visibility.

NetEdit 2.0 also provides a topology view for fast insights into network health and conformance with green/yellow/red statuses for every deployed CX switch. Dynamic, tailored views of the network are then triggered based on the layers an operator selects, offering more granular visibility into Aruba CX device status and health. This includes detailed diagnostics on what may be contributing to a performance issue—be it an application, client, or network service such as routing or segmentation.

Another new capability of NetEdit 2.0 comes in the form of GUI-driven wizards that enable operators to deploy common, yet complex configurations using only a few prompt-driven commands and clicks. This feature brings even more efficiency to short change windows and includes pre-built solutions for configs like establishing VxLAN tunnels between switches.

Make the Switch to Aruba CX

Now’s the time to switch to a next-gen network. Learn more about these exciting innovations to the Aruba CX Switching Portfolio, and get ready to displace your legacy network with a single, end-to-end architecture that will propel your business into the future of IoT, cloud, and mobile.


63% of SMBs believe cloud storage providers should do more to protect their data

Almost two-thirds of small- to medium-sized businesses (SMBs) believe that more work needs to be done to protect their data in the cloud, according to new research from cybersecurity firm IS Decisions.

Since moving to the cloud for storage, 29% of SMBs have suffered a breach of files or folders, according to the same research. Almost a third (31%) said that since the move, detecting unauthorised access has become much more difficult, and 22% admitted that hackers have gained external access using an employee’s login credentials.

The new report, entitled “Under a cloud of suspicion,” is based on research conducted with 300 heads of security at small- to medium-sized businesses across the UK, US and France who use Dropbox for Business, Google Drive, Box and Microsoft OneDrive. It examines current perceptions of cloud storage security and how those perceptions are driving data-related security decisions.

Also, according to the report, just 52% of SMBs actively monitor sensitive files for unauthorised access, while the rest only do so either on an ad hoc basis or after a breach has occurred — or in some cases, not at all. Furthermore, while many SMBs take a hybrid approach, using a combination of on-premises and cloud storage, 56% of those surveyed say it’s difficult to manage the security of data living in these hybrid infrastructures.

Commenting on the research, IS Decisions founder and CEO François Amigorena said: “There’s no doubt that the cloud has considerably enhanced the way that SMBs do business. But businesses who have moved to the cloud for storage are finding it harder to detect unauthorised access to company files and folders. The ease of sharing data among teams, and the simple integrations their storage can have with other cloud applications, significantly increase the prospect of unauthorised access. Without the right access controls in place, the risk of employee credentials being misused or stolen makes detecting unauthorised access even harder.”

“The last thing any business wants is to suffer a breach of data. Therefore, there needs to be a stronger and more efficient way to ensure that data in the cloud remains safe.”

To learn more about how SMBs perceive the security of cloud storage and what they are doing to protect their data in the cloud, download “Under a cloud of suspicion”.


Force Remote Logoff after Idle Time

Resolving Optimization and Security Issues

Fast and fluid access to shared workstations is key to their utility, but managing this access can be a challenge. Whether it’s call center agents, hospital clinicians, reception desks or student computer labs, the problem is often the same: users log in and never log off. They simply lock the workstation and walk away. Once multiple users have clicked ‘Switch User’ to log in, enough stray applications are left running in the background by all these idle users to slow the system to a crawl.

Worse still, rather than locking the workstation, some users just leave their session open. Forgotten sessions left open on shared workstations mean user accounts are at high risk of being compromised.

Securing Shared Workstations

As an entry point to an organization’s information resources, it’s paramount to ensure that these shared workstations have the best security configuration.

With UserLock, IT administrators can set an automatic forced logoff, on all locked or open machines, after a certain idle time. This includes remote desktop sessions opened by the domain user. Access to data and resources is better secured, resources are freed up, and IT saves the time otherwise spent dealing with these issues.

Video Transcription

In this video I’m going to show you how UserLock can be configured to automatically log off locked machines after an idle time.

So the first thing we need to do is make sure that the target machines have the screen saver option “On resume, display logon screen” enabled.

This can be enabled through a Microsoft GPO setting.

Back in the UserLock console, we’re going to go to the properties of the agent distribution, and here we’re going to check “Consider screen saver time as locked time.” That way, UserLock can record these events as a locked session.


Click here to apply the new setting; it will take effect on the target machines at the next reboot, or we can make it effective immediately by restarting the UserLock agent service on the target machine.

So once you’ve done that setting we need to now select the user accounts for which we’d like this setting to be enabled.

So we’re going to protect a new account, we can do this at the user, group or OU level. I’m going to do this for Active Directory group “everyone”.

I create my protected account and then by simply double-clicking on the protected account I have access to the properties and all of the settings and restrictions that I can set for this protected account.

So we’re going to go down to “Hour restrictions,” and here we have “Maximum locked time.” Once the screen saver has kicked in, this is the amount of time we will allow the computer to stay locked before we force a logoff. For example, we can put 10 minutes.


By default, the end user receives a logout notification one minute before the logoff. If we’d like to allow a little more time, so they can bypass the logoff, we can change that here. Then we can apply these settings.

So that’s it. That’s how we can configure a forced logoff of computers left open after a certain amount of idle time, to free up workstations and reduce unnecessary use of resources on your network.


  • The ‘Session’ options are supported by interactive session types (workstation and terminal).
  • The logoff initiated by these options is forced. Any unsaved documents will be lost.

A UserLock Case Study

Read how the Architectural Technology Department of the NYCCT (New York City College of Technology) stopped forgotten Windows sessions to secure their network and free up resources.

Going further with UserLock

Direct from the console, UserLock can interact remotely with any session, at any time. This includes a forced logoff to several machines at once or blocking a user with a single click.

Other use cases that involve managing user logins can be found here.

A Free Fully Functional 30 Day UserLock Trial

Don’t take our word for it. Download the fully functional free trial now and see for yourself how easily UserLock can bring a new level of security to your Windows Server network.

LimitLogin vs UserLock

This blog post reviews how LimitLogin and UserLock limit concurrent user logins in an Active Directory domain.

It will focus on the concurrent connection restriction feature provided by each solution and discuss how else they help an organization secure user access for Windows Active Directory environments.



LimitLogin is an unsupported tool released in 2005, written by a Microsoft Partner Technology Specialist and an Application Development Consultant. Its aim was to add the ability to track and limit concurrent workstation and terminal user logins in an Active Directory domain.


UserLock is an enterprise software solution that controls, audits and monitors user access to an Active Directory network. UserLock permits, denies or limits access based on a range of criteria: for example, preventing concurrent logins with a single identity, limiting access to certain device types and limiting network access methods. UserLock also monitors all sessions in real time, providing alerts and information to respond to suspicious events, plus a log of access information for audit and forensics.

UserLock is developed by IS Decisions, a Microsoft Partner company founded in 2000, that specializes in solutions to safeguard and secure Microsoft Windows and Active Directory infrastructure.

Download a fully functional free trial of UserLock now: a 30-day full version with no user limits.

LimitLogin vs UserLock. A comparison

Feature | LimitLogin | UserLock
Agent technology | Logon scripts | Windows service*
AD schema modification | Yes | No
Web server requirement | Yes | No
Supported workstation OS | Windows 2000 SP4, Windows XP SP1 | Windows XP, Windows Vista, Windows 7, Windows 8, Windows 8.1, Windows 10
Supported server OS | Windows Server 2003, Windows Server 2008 | Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server 2016
64-bit OS support | No | Yes
Client agent | Yes | Yes
Integrated deployer | No | Yes
Workstation sessions | Yes | Yes
Terminal sessions | Yes | Yes
Wi-Fi sessions | No | Yes
VPN sessions | No | Yes
IIS sessions | No | Yes
Logon and logoff events audit | Yes | Yes
Lock and unlock events audit | No | Yes
Limit by user | Yes | Yes
Limit by group | No | Yes
Location restrictions | No | Yes
Time connection restrictions | No | Yes
Time quota features | No | Yes
Customizable messages | No | Yes
E-mail notifications | No | Yes
Pop-up notifications | No | Yes
Database of session activities | No | Yes
Printable and customizable reports | No (CSV/XML only) | Yes
Supported solution | No, even by its provider Microsoft | Yes, by its editor IS Decisions

* Windows service: except on legacy Windows XP and Windows Server 2003, where the micro-agent technology is a GINA DLL.

Requirements and Specifications

LimitLogin is not compatible with Windows Server 2008 R2, 2012 or 2012 R2.

UserLock is certified for compliance and support with Windows Server 2008, 2008 R2, 2012, 2012 R2 and 2016.

LimitLogin doesn’t support 64-bit systems; UserLock does.

As with the resource kit tools and support tools, LimitLogin is not officially supported by Microsoft.

  • Windows Server Operating Systems
  • Windows Workstation Operating Systems

Limited Session Types with LimitLogin

LimitLogin’s capabilities are limited to monitoring only workstation and terminal sessions. UserLock, on the other hand, takes all session types into consideration (workstation, terminal, interactive, Internet Information Services and Wi-Fi/VPN). Learn more

  • Audited and Protected User Session Types

Architecture & Deployment

A summary comparing the architecture required to monitor and limit the number of workstation and terminal logins:

LimitLogin | UserLock
The architecture is built around 3 main elements: a Web service that handles the back-end processing on the server; an application directory partition that holds the login information; and login/logoff VBS scripts. | A client/server application: a UserLock server on a Windows server; a micro-agent on each protected machine; and optionally a SQL Server.
Requires creating a new partition in Active Directory on a Windows domain controller. | Can be installed on any member server of the network; there is no requirement to use a domain controller.
Performs an Active Directory schema modification. This operation is irreversible and cannot be cancelled. | Performs no Active Directory modification.
Requires logon and logoff scripts. | The micro-agent can be deployed automatically through the UserLock console or as an MSI package.
Requires a Web server set up to do delegated Kerberos authentication for script communication and rule processing. | Encrypted communication between the server and agents requires only Ping and the Microsoft File and Printer Sharing protocols.
Login session information is stored in unencrypted files. | Session activity is stored in a database, either SQL Express or Server Edition (a free database is provided).

Deploying LimitLogin

Microsoft LimitLogin was designed to help administrators apply login limits on their network. It is, however, complex to implement and risky due to the Active Directory schema modification it requires.

Bill Boswell (Microsoft Certified Professional Magazine) wrote this meticulous and precise breakdown of how to deploy LimitLogin:

“LimitLogin requires a bit of effort to deploy. For one thing, it performs a Schema modification. For another, it creates a new partition in Active Directory. It also requires configuring a Web server with the .NET Framework and ASP.NET and setting it up to do delegated Kerberos authentication. Finally, it requires distributing client packages that support communicating with the Web server via SOAP (a lightweight protocol for exchanging structured information in a distributed environment). Whoa. Don’t stop reading. It’s complicated, but not impossible. Really.”

Deploying UserLock

UserLock installs in minutes on a standard Windows server. The installation can be done on any member server of the domain; there is no requirement to use a domain controller. Once installed, UserLock must deploy a micro-agent onto each workstation that is a member of the selected network zone. This can be done through the UserLock console, which includes an agent deployer with manual and automatic modes. UserLock reads Active Directory information but modifies nothing regarding accounts or schema.

Download a fully functional free trial of UserLock now: a 30-day full version with no user limits.

The need to manage Concurrent connections

Most organizations that work in a Windows environment use Microsoft Active Directory to authenticate and control all users. However, Active Directory is by no means a foolproof security solution. Yes, it manages passwords and confirms that the user name matches the password, but it does not stop multiple people from logging on with the same credentials at the same time.

Limiting concurrent logins in a Windows environment averts one of the most potentially dangerous situations for a Windows Active Directory network.

Preventing or limiting concurrent sessions:

  •  Stops users from sharing their passwords. Users will think twice about sharing credentials, as they won’t be able to get on the system if someone else is already logged in
  •  Stops rogue users from using valid credentials at the same time as their legitimate owner
  •  Ensures access to critical assets is attributed to individual employees
  •  Is required for an information system to comply with major regulatory constraints, including NIST 800-53, SOX, PCI-DSS, HIPAA and the newly updated CJIS requirements.
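The gap described above – valid credentials being accepted no matter how many sessions they already have open – can be sketched with a minimal session registry. This is a purely illustrative Python model of the concurrent-login-limiting concept; the class and method names are hypothetical and do not reflect the internals or API of UserLock or LimitLogin:

```python
class SessionRegistry:
    """Toy model of concurrent-login limiting on top of password auth."""

    def __init__(self, max_sessions: int = 1):
        self.max_sessions = max_sessions
        self.active = {}  # username -> set of machine names with open sessions

    def try_login(self, user: str, machine: str) -> bool:
        """Allow the login only if the user is under their session limit."""
        sessions = self.active.setdefault(user, set())
        if len(sessions) >= self.max_sessions:
            return False  # credentials may be valid, but the login is denied
        sessions.add(machine)
        return True

    def logoff(self, user: str, machine: str) -> None:
        """Release the session so the user can log in elsewhere."""
        self.active.get(user, set()).discard(machine)
```

With a limit of one session, a second login with the same identity is refused until the first session ends – which is exactly why shared or stolen credentials become far less useful.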

Further Restrictions needed to Manage Network Access

The application LimitLogin allows an organization to manage only the number of user logins.

With UserLock, concurrent login control is just one part of a granular access control policy. UserLock sets and enforces login controls based on multiple criteria in a matrix of access rules, set according to user, user group or organizational unit.

Control where a protected account may log on from. Restrict domain users to workstation or device, IP range, department, floor or building. Learn more

Control the hours and days when protected users can log on to the network. Define working hours and/or maximum session time. Learn more


Microsoft LimitLogin was a free tool that helped administrators apply login limits on their network. It was, however, complex to implement and risky due to the Active Directory schema modification it required.

In today’s world, LimitLogin is unable to meet the critical needs of many organizations. Operating systems released during the past six years are not supported; only the number of user sessions can be controlled, with no further restrictions by location or time; and it is limited to workstation and terminal sessions only.

Defining and enforcing a full user access policy, to ensure the security of your network access and the protection of your data, requires more context variables to be considered.

The number of simultaneous accesses alone is not sufficient. You need to know, analyze and control who requests access to the enterprise network, how, when, how many times and from where – whether the request comes from a machine, through the VPN, over a wireless connection, or via a web application or an intranet.

UserLock answers these needs with an effective network access management tool that is simple to manage and easy to use. A customized access policy can be set and enforced to permit or deny user logins. Concurrent sessions can be prevented, and access can be restricted to specific workstations or devices, times, business hours and connection types (including Wi-Fi).

Download a fully functional free trial of UserLock now: a 30-day full version with no user limits.

Case Study: Bank of Cyprus reduces security risks from internal users with UserLock

Disclaimer: The comparison juxtaposes the features of IS Decisions UserLock and LimitLogin based on the publicly available information as of February 11, 2014.

Brazil suffered 15 billion attempted cyberattacks in the second quarter of 2019

Fortinet, a global leader in broad, integrated and automated cybersecurity solutions, announced the results of its latest threat research, revealing that Brazil suffered 15 billion attempted cyberattacks in just three months, between March and June 2019. Fortinet’s threat intelligence service, FortiGuard, detected the prevalence of old attacks such as those used in the WannaCry ransomware outbreak in 2017 and those that seriously breached banks in Chile and Mexico in 2018. The effectiveness of this type of attack points to the continued presence of unpatched or outdated systems in Brazilian companies and the critical need for greater investment in cybersecurity technologies.

According to Frederico Tostes, Fortinet’s Country Manager in Brazil: “Cybersecurity has gone from a complementary element to a critical necessity for every company in its digital transformation process. The question is no longer ‘what do we do if we suffer a cyberattack?’ but rather ‘what do we do when we suffer a cyberattack?’. Today, cybersecurity is a global issue, and Brazil also occupies an important place in the world as a target for cybercriminals. We are seeing threats increase at an alarming rate, both in quantity and in sophistication.”

The FortiGuard research results were presented at the Fortinet Cybersecurity Summit (FSC19), an event that brought together 1,000 network security specialists from various fields in São Paulo to discuss today’s key digital dangers, the risk landscape for the coming years and how professionals and companies can prepare for these new types of attacks. The most prominent findings include:

• Old, well-known threats remain very active in Brazil

o DoublePulsar, the trojan used to distribute malware in well-known attacks such as the WannaCry ransomware in 2017 and the attacks on banks in Chile and Mexico last year, was among the three most detected threats in Brazil in the second quarter of 2019.

• A large number of application exploit attempts aimed at denial of service

o Around 73% of network intrusion attempts detected in Brazil exploited a vulnerability that allows a command to be triggered to generate denial-of-service attacks on NTP servers (Network Time Protocol is an Internet protocol for synchronizing the clocks of computer systems across packet-routed networks).

• Malware that affects Windows and is used for cryptomining

o About 33% of the malware detected in Brazil was a worm with trojan characteristics that affects computers running the Windows operating system. It can be considered a serious attack for anyone without an up-to-date antivirus.

o In addition, the CoinHive malware, used for Bitcoin cryptomining, was the second most detected in Brazil during the second quarter of the year.

• IoT devices remain under threat from the Mirai botnet

o Since its launch in 2016, the Mirai botnet, which attacks IoT devices, has continued to see an explosion of variants and activity. Ranked second in Brazil, it continues to be used by cybercriminals as an opportunity to take control of these devices.

“Cybersecurity is an issue we have to treat as a priority. Security must be rethought comprehensively so that we are better prepared to prevent, detect and respond to threats automatically,” added Tostes. “Raising awareness of the risks, promoting comprehensive training for young professionals, helping with adaptation to new regulations such as the General Data Protection Law, and continuing to focus on advising the market through our experts and our local network of channel partners are priorities for Fortinet in Brazil.”

Fortinet is the number one cybersecurity company in Brazil, with the largest number of security appliances shipped in the country, holding a 50.12% share of the firewall, unified threat management, intrusion detection and prevention, and virtual private network markets (IDC data).

Its advanced security architecture, together with a team of specialists, allows Fortinet to help Brazilian organizations accelerate their digital transformation without compromising security. The ability of the Fortinet Security Fabric to protect every point of the network gives customers the comprehensive threat protection they need to face constantly evolving cybersecurity challenges.


Comparing 5G to Wi-Fi 6 from a Security Perspective

Enterprise-grade Wi-Fi systems have proven to be secure for thousands of demanding customers across virtually all industries. With the recent hype around 5G and service providers promoting 5G as an alternative to Wi-Fi in the enterprise, it pays to understand how 5G security stacks up against Wi-Fi security.

Understanding 5G and Wi-Fi Security

Cellular security has improved with each generation. The security of first-generation analog cellular systems, based on the AMPS (Advanced Mobile Phone System) standard, was essentially non-existent. These calls were unencrypted and could be intercepted with basic scanners. The security of currently deployed LTE networks is far better. LTE uses strengthened encryption and an authentication algorithm (“AKA”) that shares a key between the client and the receiving base station. But while LTE security is solid, it isn’t perfect.

According to researchers at Purdue and the University of Iowa, LTE is vulnerable to some types of cyberattacks, including data interception and device tracking. The Associated Press last year reported the US Department of Homeland Security (DHS) has acknowledged the existence in Washington, DC of cell site simulators, called “Stingrays,” that could track cellular devices, intercept calls, and potentially even plant malware. 5G security improves upon LTE security incrementally, with identical encryption, slightly hardened authentication, and better key management. But overall, 5G security is largely comparable to LTE security.

Just as cellular security has improved, Wi-Fi security has evolved with each generation. Early Wi-Fi networks, beginning in the late 90s, used weak encryption and authentication, called “WEP”. The subsequent WPA and WPA2 standards feature improved encryption. Authentication with WPA2 can be either enterprise-grade 802.1X, or weaker PSK (pre-shared key), which hackers could potentially break by running through possible passwords until they can confirm the WPA2 handshake using a guessed password. This is called a Dictionary Attack. For this reason, most of our enterprise customers implement WPA2 with 802.1X, which is not prone to dictionary attacks. While some people claim Wi-Fi is insecure, pointing to poorly implemented networks that deactivate all password protection (e.g., a local coffee shop), this is not representative of enterprise practices. Still, the Wi-Fi industry developed WPA2 15 years ago, at a time when the wireless, computing, and security landscapes were substantially different.

Recently, the Wi-Fi Alliance standards body responded with WPA3—a significant update to Wi-Fi security. Aruba is a leader in the development of WPA3. WPA3 implementations fall into one of three categories: (1) OWE (“Enhanced Open”), which encrypts traffic to prevent snooping attacks on open networks that do not have password protection, (2) WPA3-Personal, which uses a shared-secret and a cryptographically stronger key exchange that is resistant to dictionary attacks, and (3) WPA3-Enterprise, which significantly strengthens enterprise-grade 802.1X and optionally includes the same Suite B/CNSA crypto algorithms used for Top-Secret or higher classified government networks. Unlike 5G, which is not backward-compatible and requires completely new handsets and radio networks, customers can upgrade the software on most of Aruba’s currently deployed Wi-Fi networks to include WPA3 (unless they are implementing Suite B/CNSA). We expect major handset OS vendors, such as Apple and Google, to roll out WPA3 and Enhanced Open by the end of 2019. WPA3 certification will be mandatory for all new Wi-Fi 6 equipment starting later this year.

It’s also worth noting that cellular encryption generally has lagged Wi-Fi encryption. For example, LTE encryption is based on an algorithm that uses a 64-bit key length, while WPA2-AES encryption, part of Wi-Fi since 2004, uses 128-bit encryption. 5G uses 128-bit encryption and may, in a future release of the 5G standard, upgrade to 256-bit encryption. Wi-Fi already supports 256-bit encryption through the Suite B/CNSA extensions of WPA3.
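A quick back-of-envelope calculation shows why these key lengths matter. The attacker rate below is a hypothetical figure chosen for illustration, not a measured one:

```python
# Rough brute-force comparison of the key lengths discussed above.
GUESSES_PER_SECOND = 1e12  # assumed attacker: one trillion key trials per second

def years_to_exhaust(key_bits: int, rate: float = GUESSES_PER_SECOND) -> float:
    """Years needed to try every key in a 2**key_bits keyspace at the given rate."""
    return (2 ** key_bits) / rate / (60 * 60 * 24 * 365)

for bits in (64, 128, 256):
    print(f"{bits}-bit keyspace: ~{years_to_exhaust(bits):.2e} years to exhaust")
```

Even at that implausibly high rate, a 64-bit keyspace falls in well under a year, while 128-bit and 256-bit keyspaces remain astronomically far out of reach—the gap between LTE’s 64-bit algorithm and Wi-Fi’s 128/256-bit AES options is not incremental.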

Until this point, we’ve highlighted the evolution and current state of authentication, encryption, and key management for cellular and Wi-Fi standards. These are important security design elements. But it’s also important to consider a customer’s ability to tailor its networks to suit its needs by applying specific security and compliance tools and policies. The average security buyer at a large enterprise uses more than 50 different security and compliance tools, and no two organizations have exactly the same needs. Our customers have been successfully deploying their chosen security and policy tools to enterprise Wi-Fi networks for decades. The architecture of these networks is flexible and allows customers to break out, analyze, and apply policy to traffic. Wi-Fi 6 and WPA3 completely retain this flexibility.

5G is a different story. If an enterprise wants to replace Wi-Fi with 5G, there are a few different approaches. Each has implications for security customization.

  • The first approach is to extend macro 5G service into the enterprise using DAS (Distributed Antenna Systems) or small cells. With this approach, it is difficult to break out traffic and implement specific security solutions. In other words, you get what you get.
  • If your company is large enough, and your service provider is willing to sell and manage an individualized Network Slice, you could buy a slice specific to your company. Network slicing enables carriers to create customized virtual network overlays under one nationwide, physical network. With slicing, they can tune each of these virtual networks to serve business cases that require specific network characteristics. Your service provider may sell a low-latency network slice, or an IoT-oriented network slice. You could then have the service provider apply specific security solutions to that slice and possibly even manage it for you, as a part of their network. But all traffic passing over such a slice would be invisible to security appliances that are wired directly into an enterprise network.
  • Your enterprise could choose to deploy a private 5G network on your premises, using either spectrum licensed from a service provider, or possibly other spectrum that is unlicensed (e.g., CBRS spectrum). You can apply security to a private 5G network in a similar way you can apply it to an enterprise Wi-Fi network, but this requires investing in completely separate, parallel network infrastructure. Consequently, this approach will likely be limited to very specific enterprise use cases.

Security is not a monolithic consideration. It includes elements like authentication, encryption, and key management. For well-designed and deployed networks, we believe these elements for Wi-Fi 6 are equal to, or better than, 5G. An equally important consideration is the ability of an enterprise to apply the specific security and policy tools to their network in a flexible way, tailored to its needs. Wi-Fi enterprise networks are highly flexible, as they always have been. But depending on the deployment approach for a 5G network, it may or may not be able to accommodate the level of security and compliance customization required by enterprise customers.

Aruba Executive Perspectives on 5G and Wi-Fi 6
Jeff Lipton: Making Sense of 5G and Wi-Fi in the Enterprise

Stuart Strickland: What is 5G?

Stuart Strickland: Wi-Fi as the On-Ramp to 5G