Network Enhancers - "Delivering Beyond Boundaries"

Friday, March 10, 2017

Addressing pain points in governance, risk and compliance (GRC)


In this day and age, it seems as though every business has some form of alphabet soup or acronym salad shaping the decisions they make about their information security programs. Between data privacy laws, regulations on the financial industry, calls for a healthcare-focused cybersecurity framework, and regular updates to the PCI DSS, the ever-growing need for a well-established information security program is apparent.

As enterprises exercise their appetite for risk, their ability to assure the board of directors (and inherently the shareholders) that the appropriate controls are in place to protect their critical information and assets is crucial. The days of setting, forgetting, and burying our heads in the proverbial sand are long past. Accountable parties are under ever-increasing pressure to validate the effectiveness of the programs they have in place and provide actionable assurances that due care was taken.

Where is this heading?

We understand the motivations, the want, and the need, yet the reality of the situation doesn’t always align with what we would expect. Cybercrime is not just the elephant in the room; it’s the elephant in the room that’s been tagged with a Banksy-esque portrayal of modern gangsters kicking back and laughing. Criminal organizations are swelling like a tidal wave crashing down on the corporate landscape, yet many businesses still take a reactive rather than proactive approach to their Information Technology/Information Security (IT/IS) GRC needs. Perhaps this is because we have yet to see a nationwide regulation that mandates controls across multiple business verticals rather than industry-specific requirements.

Now combine that reactive, traditional spreadsheet-based approach to GRC with understaffed, overworked personnel. Too often these employees are slammed with surprise audits from business leaders who hand down high-level mandates such as “We’re gonna be ISO certified” without truly understanding the workload they just tossed down the org chart. The elephant grows. How can one or two people in an enterprise tackle the elephant in the room and drag it outside where it belongs?

Give me some hope

It is likely that the challenges and pain derived from GRC activities will continue to grow, which will further accelerate market trends we are already seeing. In the IT/IS GRC market segment, my clients lack the time to keep up with the rapidly changing onslaught of privacy and data security regulations.

As I hinted above, it is good that governments are pressing the need to protect the private information entrusted to businesses by their customers. However, those businesses will continue to be burdened by this trend, whether through sunk time or fines.

In addition to the external changes shaping the internal governance policies that businesses put into place, the IT/IS systems within enterprise architectures are in a state of regular flux. It is rare for a system to remain static for any significant period, and with every change the same question must be asked: “Is the current machine state compliant?” Without the correct tools in place, answering this question becomes its own burden, and manual tracking in a spreadsheet eventually becomes impossible.
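To make that question concrete, here is a minimal sketch of how a tool might check a machine's state against a compliance baseline instead of a spreadsheet. The baseline settings, their values, and the host snapshot are illustrative assumptions, not drawn from any particular standard or product.

```python
# Minimal sketch: compare an observed host configuration against a
# compliance baseline. All setting names and values are illustrative only.

BASELINE = {
    "password_min_length": 14,
    "disk_encryption": True,
    "ssh_root_login": False,
}

def check_compliance(observed: dict) -> list:
    """Return findings for any setting that deviates from the baseline."""
    findings = []
    for setting, expected in BASELINE.items():
        actual = observed.get(setting)
        if actual != expected:
            findings.append(f"{setting}: expected {expected!r}, found {actual!r}")
    return findings

if __name__ == "__main__":
    # Hypothetical snapshot of one machine's current state.
    host_state = {"password_min_length": 8, "disk_encryption": True, "ssh_root_login": True}
    for finding in check_compliance(host_state):
        print("NON-COMPLIANT -", finding)
```

Re-running a check like this after every change is exactly the kind of task that quickly outgrows manual spreadsheet tracking.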

Still waiting for that hope…

Thankfully, we are living in a time when the options available for GRC tools are growing. The market was traditionally dominated by large-scale, expensive systems, but we are now seeing disruptive companies enter and offer reasonable alternatives to the status quo. However, as with any tool selection, evaluating them can bring on a fair amount of vendor fatigue.

It is best to have a short list of what you want to get out of this investment. When navigating the path of GRC vendor courtship, I advise checking off as many of the following boxes as possible:

Affordability – Not everyone can afford a high-end, global, enterprise-class implementation, but most organizations will benefit from a tool that fits their budget.

Mitigation, Remediation, and Delegation – Does the tool support tracking of remediation efforts, risk analysis processes, and an ability to seamlessly delegate accountability to system owners for remediation and mitigation of identified risks?

Streamlined Vendor Risk Management – Can this tool help reduce the probability of a Target-like breach by giving you the ability to semi-automate the evaluation of a third-party vendor’s risk profile?

Policy Libraries – Does the tool support dynamic updates of policies within a library to ease the burden of manually tracking changes to governing regulations, standards, and other best practice publications?

Policy Mapping – Can internal policies be easily mapped to, or overlaid on, governing regulations and standards such as HIPAA, COBIT, and ISO? (A small illustration of this idea follows the list.)

Views – Can multiple views be established so that information of value to different business units within your enterprise is visible to each of them?
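As promised in the Policy Mapping item above, here is a minimal sketch of what overlaying internal policies onto external frameworks can look like as data rather than as a spreadsheet. The internal control names are invented, and the ISO 27001 and HIPAA references are given only to illustrate the overlay idea.

```python
# Minimal sketch: overlay internal policy controls onto external frameworks.
# Internal control IDs are hypothetical; external references are illustrative.

POLICY_MAP = {
    "INT-AC-01 Password policy": {
        "ISO 27001": ["A.9.4.3"],
        "HIPAA": ["164.308(a)(5)(ii)(D)"],
    },
    "INT-LOG-02 Audit logging": {
        "ISO 27001": ["A.12.4.1"],
        "HIPAA": ["164.312(b)"],
    },
}

def coverage_for(framework: str) -> dict:
    """List which internal controls claim coverage under a given framework."""
    return {
        internal: refs[framework]
        for internal, refs in POLICY_MAP.items()
        if framework in refs
    }

if __name__ == "__main__":
    for control, refs in coverage_for("ISO 27001").items():
        print(control, "->", ", ".join(refs))
```

A GRC tool that maintains a mapping like this for you, and updates it as the external frameworks change, removes one of the heaviest manual burdens in the list above.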

The end goal with the implementation of any tool is to streamline the general day-to-day processes of GRC activities, support collaborative efforts between departments, and offer a central repository for documentation that validates compliance with both internal policies and external regulatory governance. The key part is the collaboration. An effective GRC discipline requires company-wide buy-in. The easier you make it for your colleagues, the easier you make it for yourself. That way, when the time comes to jump into the next audit wave, you can prove once and for all that GRC isn’t just another four-letter word.

Wednesday, March 8, 2017

Six best practices for managing cyber alerts


Security professionals know that the number of cyber alerts is growing at a frantic pace. Even a mid-sized company can face tens of thousands of alerts every month. As the 2013 Target breach demonstrated, failing to investigate alerts adequately and respond to them effectively can have serious consequences for a business as well as its customers.

Ignoring alerts is not an option, so how can busy professionals help their staff members manage the increasing volume without jeopardizing the security of the organization? Here is a list of six best practices that can help.

1. Automate, collaborate and streamline responses

Automating responses to routine issues gives security analysts more time to devote to the priority issues that present the most risk to the company. Analysts are then free to focus on keeping the company secure rather than manually processing tickets, assigning them, and tracking progress.
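As a sketch of what that automation might look like, the snippet below auto-closes low-severity alerts of known-routine types and opens tickets only for everything else. The alert fields, severity scale, and create_ticket helper are assumptions made for illustration, not any particular product's API.

```python
# Minimal sketch of automated triage: routine, low-severity alerts are
# auto-closed; everything else becomes a tracked ticket for an analyst.
# The alert schema and create_ticket() helper are hypothetical.

ROUTINE_TYPES = {"failed_login_single", "av_signature_update"}

def create_ticket(alert: dict) -> str:
    """Placeholder for a real ticketing integration (e.g. an ITSM API call)."""
    return f"TICKET-{alert['id']}"

def triage(alerts: list) -> list:
    tickets = []
    for alert in alerts:
        if alert["type"] in ROUTINE_TYPES and alert["severity"] <= 2:
            # Routine and low severity: close automatically, keep an audit trail.
            print(f"auto-closed alert {alert['id']} ({alert['type']})")
        else:
            tickets.append(create_ticket(alert))
    return tickets

if __name__ == "__main__":
    sample = [
        {"id": 1, "type": "failed_login_single", "severity": 1},
        {"id": 2, "type": "malware_beacon", "severity": 5},
    ]
    print("open tickets:", triage(sample))
```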

Automating collaboration allows senior staff members to provide additional training to new hires in a relaxed, productive manner. Bottom line: All staff members become more efficient with automation and collaboration.

2. Guard against employee fatigue

According to a study by the industry analyst firm IDC, teams of three staff members face 300 or more alerts every day. The study also found that more than 35 percent of respondents’ companies spend 500 hours per month responding to alarms. In short, security professionals are exhausted, and that gives hackers an unfair advantage.

Hiring more security staff is not always a solution. A well-organized team focused on priority issues does more to prevent burnout. For smaller companies, partnering with an outside managed security services provider may be the best approach.

3. Harness the power of big data for behavioral analytics

Analytics that simply detect suspicious activity without assigning priorities are giving way to behavioral analytics, which use historical data to establish what is normal for the company and flag meaningful deviations from that baseline.
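As a rough sketch of the idea, the snippet below builds a baseline from historical daily counts and flags a day that deviates sharply from it. The sample history and the three-standard-deviation threshold are arbitrary illustrations, not recommendations.

```python
# Minimal sketch of behavioural baselining: flag a daily activity count that
# deviates sharply from the historical mean. Data and threshold are illustrative.
from statistics import mean, stdev

def is_anomalous(history: list, today: int, sigmas: float = 3.0) -> bool:
    """Return True if today's count falls outside mean +/- sigmas * stdev."""
    mu, sd = mean(history), stdev(history)
    return sd > 0 and abs(today - mu) > sigmas * sd

if __name__ == "__main__":
    logins_per_day = [42, 38, 45, 40, 44, 39, 41]   # hypothetical history
    print(is_anomalous(logins_per_day, today=120))  # True: unusual spike
    print(is_anomalous(logins_per_day, today=43))   # False: within normal range
```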

4. Develop an incident response plan and keep it updated

Be sure to include procedures for handling false positives and duplicates, assigning alerts to staff members for evaluation, and tracking progress. Automation saves significant time in these areas, but the plan should also define the types of incidents to be handled by key personnel.
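One small, assumed example of the duplicate-handling procedure mentioned above: collapse alerts that share the same source and rule within a short window so each issue is evaluated once. The field names and the 15-minute window are illustrative choices, not a prescribed standard.

```python
# Minimal sketch of alert de-duplication: alerts with the same (source, rule)
# inside a time window collapse into one entry. Field names are hypothetical.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)

def deduplicate(alerts: list) -> list:
    """Keep only the first alert per (source, rule) seen within the window."""
    last_seen = {}
    unique = []
    for alert in sorted(alerts, key=lambda a: a["time"]):
        key = (alert["source"], alert["rule"])
        if key not in last_seen or alert["time"] - last_seen[key] > WINDOW:
            unique.append(alert)
        last_seen[key] = alert["time"]
    return unique

if __name__ == "__main__":
    t0 = datetime(2017, 3, 8, 9, 0)
    raw = [
        {"source": "10.0.0.5", "rule": "port_scan", "time": t0},
        {"source": "10.0.0.5", "rule": "port_scan", "time": t0 + timedelta(minutes=5)},
        {"source": "10.0.0.9", "rule": "port_scan", "time": t0 + timedelta(minutes=6)},
    ]
    print(len(deduplicate(raw)))  # 2: the second alert is a duplicate
```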

5. Automate the common analysis of logged activity

An alert may not signify an infection. But the only way to know whether a device is infected is to correlate the alert with additional logged activity and with other sources of information, such as threat feeds. By the time this step has been completed manually, the infection could have worsened. Automating this step alone could save many hours of work for every alert handled by a staff member.
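As a hedged sketch of automating that correlation step, the snippet below cross-references an alert's source address against related log entries and a threat-intelligence list. The sample feed, log entries, and the "likely_infection" rule are invented for illustration.

```python
# Minimal sketch: correlate one alert with related log activity and a
# threat-intelligence indicator list. All inputs are hypothetical samples.

THREAT_FEED = {"203.0.113.7", "198.51.100.23"}  # known-bad IPs (illustrative)

def correlate(alert: dict, logs: list) -> dict:
    """Summarise how much corroborating evidence exists for a single alert."""
    related = [entry for entry in logs if entry["src_ip"] == alert["src_ip"]]
    return {
        "alert_id": alert["id"],
        "on_threat_feed": alert["src_ip"] in THREAT_FEED,
        "related_log_events": len(related),
        "likely_infection": alert["src_ip"] in THREAT_FEED and len(related) > 1,
    }

if __name__ == "__main__":
    alert = {"id": 42, "src_ip": "203.0.113.7"}
    logs = [
        {"src_ip": "203.0.113.7", "action": "dns_query"},
        {"src_ip": "203.0.113.7", "action": "outbound_connect"},
        {"src_ip": "192.0.2.10", "action": "login"},
    ]
    print(correlate(alert, logs))
```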

6. Remember the human factor

Experts estimate that 95 percent of all data breaches can be attributed to human error or recklessness. Ensuring that users are properly trained on security measures may help reduce the number of alerts. However, it’s also crucial that staff members receive training that stresses the importance of investigating all alerts. Security experts who feel overworked and unappreciated may come to believe that investigating only a small fraction of the alerts fulfills their obligations.

Hackers will always be on the cutting edge of technology, and attacks will continue to increase across organizations both large and small. Developing an effective alert-handling strategy now will ensure the process scales as the volume grows.


Tuesday, March 7, 2017

The six stages of a cyber attack lifecycle


The traditional approach to cybersecurity has been a prevention-centric strategy focused on blocking attacks. While prevention remains important, many of today’s advanced and motivated threat actors circumvent perimeter-based defences with creative, stealthy, targeted, and persistent attacks that often go undetected for significant periods of time.

In response to the shortcomings of prevention-centric security strategies, and the challenges of securing an increasingly complex IT environment, organisations should be shifting their resources and focus towards strategies centred on threat detection and response. Security teams that can reduce their mean time to detect (MTTD) and mean time to respond (MTTR) can decrease their risk of experiencing a high-impact cyber incident or data breach.
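To make the two metrics concrete, here is a small worked sketch that computes MTTD and MTTR from incident timestamps. The incident records are invented for illustration; in practice the timestamps would come from your case-management or SIEM data.

```python
# Minimal sketch: compute mean time to detect (MTTD) and mean time to respond
# (MTTR) from incident timestamps. The sample incidents are hypothetical.
from datetime import datetime
from statistics import mean

incidents = [
    {"compromised": datetime(2017, 3, 1, 8, 0),
     "detected":    datetime(2017, 3, 1, 20, 0),
     "contained":   datetime(2017, 3, 2, 2, 0)},
    {"compromised": datetime(2017, 3, 4, 9, 0),
     "detected":    datetime(2017, 3, 4, 13, 0),
     "contained":   datetime(2017, 3, 4, 17, 0)},
]

mttd = mean((i["detected"] - i["compromised"]).total_seconds() / 3600 for i in incidents)
mttr = mean((i["contained"] - i["detected"]).total_seconds() / 3600 for i in incidents)

print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")  # MTTD: 8.0 h, MTTR: 5.0 h
```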

Fortunately, high-impact cyber incidents can be avoided if you detect and respond quickly with end-to-end threat management processes. When a hacker targets an environment, a process unfolds from initial intrusion through to eventual data breach, if that threat actor is left undetected. The modern approach to cybersecurity requires a focus on reducing MTTD and MTTR where threats are detected and killed early in their lifecycle, thereby avoiding downstream consequences and costs.

Cyber attack lifecycle steps

The typical steps involved in a breach are:

Phase 1: Reconnaissance – The first stage is identifying potential targets that satisfy the mission of the attackers (e.g. financial gain, targeted access to sensitive information, brand damage). Once they determine what defences are in place, they choose their weapon, whether it’s a zero-day exploit, a spear-phishing campaign, bribing an employee, or some other method.

Phase 2: Initial compromise – The initial compromise is usually in the form of hackers bypassing perimeter defences and gaining access to the internal network through a compromised system or user account.

Phase 3: Command & control – The compromised device is then used as a beachhead into an organisation. Typically, this involves the attacker downloading and installing a remote-access Trojan (RAT) so they can establish persistent, long-term, remote access to your environment.

Phase 4: Lateral movement – Once the attacker has an established connection to the internal network, they seek to compromise additional systems and user accounts. Because the attacker is often impersonating an authorised user, evidence of their existence can be hard to see.

Phase 5: Target attainment – At this stage, the attacker typically has multiple remote access entry points and may have compromised hundreds (or even thousands) of internal systems and user accounts. They have built a deep understanding of the IT environment and are within reach of their target(s).

Phase 6: Exfiltration, corruption, and disruption – The final stage is where costs to the business rise exponentially if the attack is not defeated. This is when the attacker executes the final aspects of their mission: stealing intellectual property or other sensitive data, corrupting mission-critical systems, and generally disrupting the operations of your business.

The ability to detect and respond to threats early on is the key to protecting a network from large-scale impact. The earlier an attack is detected and mitigated, the lower the ultimate cost to the business. To reduce MTTD and MTTR, an end-to-end detection and response process, referred to as Threat Lifecycle Management (TLM), needs to be implemented.

Threat lifecycle management

Threat Lifecycle Management is a series of aligned security operations capabilities and processes that begins with the ability to “see” broadly and deeply across the IT environment, and ends with the ability to quickly mitigate and recover from a security incident.

Before any threat can be detected, evidence of the attack within the IT environment must be visible. Threats target all aspects of the IT infrastructure, so the more you can see, the better you can detect. There are three principal types of data to focus on, generally in the following order of priority: security event and alarm data, log and machine data, and forensic sensor data.

While security event and alarm data is typically the most valuable source of data for a security team, there can be a challenge in rapidly identifying which events or alarms to focus on. Log data can provide deeper visibility into an IT environment to illustrate who did what, when and where. Once an organisation is effectively collecting their security log data, forensic sensors can provide even deeper and broader visibility.

Once visibility has been established, companies can detect and respond to threats. Discovery of potential threats is accomplished through a blend of search and machine analytics. Discovered threats must be quickly qualified to assess the potential impact to the business and the urgency of response efforts. When an incident is qualified, mitigations to reduce and eventually eliminate risk to the business must be implemented. Once the incident has been neutralised and risk to the business is under control, full recovery efforts can commence.

By investing in Threat Lifecycle Management, the risk of experiencing a damaging cyber incident or data breach is greatly reduced. Although internal and external threats will exist, the key to managing their impact within an environment and reducing the likelihood of costly consequences is through faster detection and response capabilities.

