Friday, September 16, 2011

Deep Packet Inspection: don’t mix up content inspection and network analysis


Deep Packet Inspection (DPI) is a term widely used in the cyber security field, but it has two different meanings depending on the context: DPI used for content inspection, and DPI used for network analysis.
Because I see a lot of confusion in the market between the two functions, I thought it would be useful to provide some clarification.

Deep Packet Inspection is a technology used to inspect packets circulating over the network by looking not only at the headers, but also at the packet payload. That said, you can look in the packet payload for different kinds of information.

1)    Content inspection: in this context, DPI is used to look for virus or malware signatures that could be embedded in flows (packets, emails or documents received by a user). The DPI engine looks for specific patterns and matches them against a list of known malicious patterns, using pattern matching algorithms and regular expression functions.
2)    Network analysis: in this context, DPI is used to identify the protocols and applications running on a network. This requires pattern matching, but also more complex protocol grammar analysis and statistical analysis. Advanced forms of DPI also extract metadata from flows, such as the sender and receiver of an email. (A minimal sketch of both functions follows this list.)
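
To make the distinction concrete, here is a minimal Python sketch of the two functions. It is purely illustrative: the signature list and the protocol classifier are invented for the example, and real engines use optimized multi-pattern matchers and full protocol grammars rather than a handful of regular expressions.

```python
import re

# 1) Content inspection: match payload bytes against known bad patterns.
SIGNATURES = [
    (re.compile(rb"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"), "EICAR test string"),
    (re.compile(rb"shell\.exe"), "suspicious file name"),
]

def inspect_content(payload: bytes) -> list:
    """Return the names of all signatures found in the payload."""
    return [name for pattern, name in SIGNATURES if pattern.search(payload)]

# 2) Network analysis: identify the protocol carried by the flow.
def classify_flow(payload: bytes) -> str:
    """Toy protocol classifier: real engines combine grammar analysis
    and statistics, not a single pattern per protocol."""
    if payload.startswith((b"GET ", b"POST ", b"HTTP/")):
        return "http"
    if payload.startswith((b"EHLO", b"HELO")):
        return "smtp"
    return "unknown"

payload = b"GET /download/shell.exe HTTP/1.1\r\n"
print(classify_flow(payload))    # http
print(inspect_content(payload))  # ['suspicious file name']
```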

So we see that DPI is used to fulfill two different functions. They are complementary, and it makes no sense to compare the features and performance of a content inspection engine with those of a network analysis engine, even if they both use Deep Packet Inspection.
An advanced cyber security product should embed both: network analysis to enable an application-aware firewall and normalized content extraction, and content inspection to search for virus signatures in content normalized by network analysis.

The table below shows the difference between the two categories of DPI implementation.

Content inspection
  • Method: DPI inspects the content of packets/flows, not only the headers
  • Objective / features: detect hundreds of thousands of virus/file signatures inside documents
  • How it works: a lexer detecting patterns / regular expressions
  • Implementation: can be software (PCRE, Sensory Networks) or hardware (Tarari, NetLogic NetL7)
  • Found in: IDS/IPS/AV

Network analysis
  • Method: DPI inspects the content of packets/flows, not only the headers
  • Objective / features: recognize and analyze protocols and applications; fully decode a protocol to export metadata
  • How it works: a parser combining multiple algorithms such as pattern matching, flow correlation and behavior analysis
  • Implementation: software only (e.g. Qosmos ixEngine)
  • Found in: next-generation firewalls, NBAD, forensics

Thursday, May 26, 2011

NetWitness and ipoque Acquisitions Show That Network Intelligence Technology is Becoming Crucial

Recently, there have been some interesting movements among vendors who leverage DPI and network intelligence technology.

Last month, EMC acquired NetWitness: http://netwitness.com/about/press-releases/2011-emc-acquires-netwitness-corporation. NetWitness is a leading network security analysis vendor that has been using DPI inside its solutions. According to this article in Network World (http://www.networkworld.com/news/2011/032411-emc-netwitness.html), EMC’s strategy is to strengthen RSA’s enVision SIEM offering with "an additional source of network activity" and "another angle on analysis". This confirms the need for real-time network intelligence in cyber security applications. Will this lead to further M&A in this sector, involving companies like Solera or Niksun? We’ll see. In any case, I think SIEM vendors (ArcSight, Q1 Labs, LogLogic, CA, etc.) will take a closer look at DPI and Network Intelligence technology to enhance their products.

This week, Rohde & Schwarz announced the acquisition of ipoque: http://www.ipoque.de/news-and-events/news. ipoque mainly sells vertical solutions such as its PRX traffic management product (with DPI inside), but also supplies its Protocol and Application Decoding Engine (PADE) to vendors in an OEM model. It will be interesting to observe the strategic evolution of the ipoque product portfolio as part of Rohde & Schwarz’s offering: will they decide to increase focus on QoS solutions? Will they continue to sell PADE?

I think we will soon see more consolidation among vendors who leverage DPI and Network Intelligence technology. These are interesting developments – stay tuned.


Jerome

Friday, April 22, 2011

Is your DPI technology battle-proof?

This is a question I hear more and more frequently from the product managers and CTOs I speak to. Surprisingly, it is a topic on which there is very little information, probably because it is complex and requires deep expertise. But you need to be aware that Deep Packet Inspection engines, like any system, may be circumvented or blocked by malicious actions, or rendered inoperable by extreme traffic conditions.

Deep Packet Inspection is not always an exact science. If you run exactly the same traffic through three different brands of DPI equipment, you will get three different results. Why is this?

The result of a DPI analysis depends on two kinds of factors:

1) Deliberate actions to hide, using the numerous opportunities offered by non-standard, complex, decentralized networks; for example, people may use tunnels, or change the shape of their packets in order to bypass a DPI system designed to handle only "normal" packet shapes. Some DPI systems may detect this behavior; some may not. Also, deliberate attacks on servers may alter the way a DPI engine performs even if the engine itself is not targeted: how would your DPI engine perform during a SYN flood attack?

2) Accidental causes deriving from traffic conditions, configuration errors and bugs in network devices, misconfigured networks, etc. For example, a server configured in a "byte by byte" mode would send the "GET" method used in the HTTP protocol in three different packets (one for G, one for E and one for T). A traditional DPI engine looks for the "GET" pattern within a single packet, which means it is unable to detect the HTTP protocol; the sketch below shows this failure mode. And this is just one very basic example of a case where naive DPI is ineffective. Here again, some DPI systems have been designed to cope with such malformed traffic; some have not.
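
To illustrate, here is a minimal Python sketch (purely illustrative, not how any particular engine works) contrasting naive per-packet matching with matching over the reassembled byte stream:

```python
# Why per-packet matching misses patterns split across packets,
# and how reassembling the byte stream before matching fixes it.

def match_per_packet(packets):
    """Naive DPI: looks for 'GET ' inside each packet independently."""
    return any(b"GET " in p for p in packets)

def match_reassembled(packets):
    """Stream-aware DPI: reassembles the flow, then matches."""
    return b"GET " in b"".join(packets)

# A server sending "GET / HTTP/1.1" one byte at a time:
packets = [bytes([b]) for b in b"GET / HTTP/1.1\r\n"]

print(match_per_packet(packets))   # False: HTTP traffic goes undetected
print(match_reassembled(packets))  # True: the flow is identified as HTTP
```

Production engines must of course do this reassembly under tight memory and performance constraints, across thousands of simultaneous flows.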

The good news is that this is not inevitable. Because reverse engineering protocols and applications means working with real-life traffic conditions, and decoding both standard and malformed traffic, there is always a way to accurately detect each networked event. But this requires considerable investment in building DPI software that is resilient, robust and reliable.

Many DPI engines do not pay sufficient attention to this topic, which can result in performance and security issues. This is obviously a key concern for cyber security applications, which could be weakened, but also for all applications that require accuracy and data quality, such as charging or parental control.

Working on resiliency, robustness and reliability is an ongoing effort, and should be top of mind for Deep Packet Inspection developers and product managers.

Jerome

Tuesday, February 8, 2011

Security within an evolving Internet

This time, I have invited a guest blogger: Pierre Françon, president of Quaelys and a respected IP security expert. Pierre describes the new security challenges created by the coexistence of IPv4 and IPv6 on the Internet.

The Internet, as we know it, is based on IPv4 and is considered homogeneous and open. Communications are established end to end. The only exception is NAT (Network Address Translation), used and controlled on the subscriber's own equipment (the ADSL box).

The depletion of public IPv4 addresses is accelerating, so the Internet is going to evolve in the very short term (even if some Internet providers adopt a slower migration than others). We will see a dual Internet based on two similar but incompatible protocols (IPv4 and IPv6). Technically, the customer will be given two parallel communication channels, usable simultaneously, on two independent networks of networks at the IP level.

The historical way of using IPv4 addresses end to end will cease. Instead, Internet providers will use NAT on their own networks; this is known as Carrier Grade NAT (CGN). This method will require new application gateways for protocols that carry IP addresses, such as VoIP/SIP. Moreover, the IPv4 traffic collected from the customers' ADSL boxes to the CGN will be encapsulated in IPv6 using a method named Dual-Stack Lite, as sketched below.
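
As an illustration of what this means for traffic inspection, here is a minimal Python sketch of peeking inside a Dual-Stack Lite packet. It assumes raw packet bytes with no IPv6 extension headers; the field offsets come from the IPv6 and IPv4 specifications (the IPv6 fixed header is 40 bytes, and Next Header value 4 signals an encapsulated IPv4 packet):

```python
import socket

IPV6_HEADER_LEN = 40    # fixed IPv6 header size, in bytes
NEXT_HEADER_IPV4 = 4    # IANA protocol number for IPv4-in-IPv6

def inner_ipv4_addresses(packet: bytes):
    """Return (src, dst) of the tunnelled IPv4 packet, or None if the
    IPv6 payload is not an encapsulated IPv4 packet."""
    if packet[6] != NEXT_HEADER_IPV4:       # IPv6 Next Header field
        return None
    inner = packet[IPV6_HEADER_LEN:]        # the encapsulated IPv4 packet
    src = socket.inet_ntoa(inner[12:16])    # IPv4 source address
    dst = socket.inet_ntoa(inner[16:20])    # IPv4 destination address
    return src, dst

# Toy DS-Lite packet: an IPv6 header carrying a minimal IPv4 header.
outer = bytearray(40)
outer[0] = 0x60                             # IP version 6
outer[6] = NEXT_HEADER_IPV4
inner = bytearray(20)
inner[0] = 0x45                             # IPv4, header length 20
inner[12:16] = socket.inet_aton("10.0.0.1")
inner[16:20] = socket.inet_aton("192.0.2.99")
print(inner_ipv4_addresses(bytes(outer + inner)))  # ('10.0.0.1', '192.0.2.99')
```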
From the security point of view, besides the introduction of IPv6 and the complexity of CGN, the real disruption comes from this notion of duality... and how someone can use it for their own benefit.

This duality is twofold: (1) first, one has at one's disposal two simultaneous and separate communication channels in the same technical environment (LAN and desktop); (2) second, from both channels one can simultaneously reach the same servers, networks, infrastructures... and targets (CGN, routers, caches, application servers, desktops...).

In this new context, IPv4 and IPv6 cannot be treated separately. Risk analysis has to take this duality into account, both because there is not a lot of IPv6 experience and because dual stack has weakened the IPv4 world. Preventive security mechanisms must also be dual: for example, a spammer using one channel has to be blacklisted on both channels. This is not easy, as the protocols are different and the user identification methods are not identical: in IPv6 a prefix (a full subnet) is allocated to the subscriber, whereas in IPv4 the identifier is the IP address plus port numbers per protocol (the same IPv4 address being shared on the CGN between many subscribers). A toy sketch of such dual blacklisting follows.
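
Here is that sketch in Python. The subscriber record and identifiers are invented: the point is simply that one abuse report must translate into two blacklist entries, keyed differently on each channel (delegated IPv6 prefix versus shared IPv4 address plus port range):

```python
from ipaddress import ip_address, ip_network

# Hypothetical subscriber database, as an access provider might hold it.
SUBSCRIBERS = {
    "sub-42": {
        "ipv6_prefix": ip_network("2001:db8:42::/56"),            # delegated prefix
        "ipv4_shared": (ip_address("192.0.2.10"), (2000, 2999)),  # addr + port range
    },
}

blacklist_v6 = set()    # blocked IPv6 prefixes
blacklist_v4 = set()    # blocked (shared IPv4 address, port range) pairs

def blacklist_subscriber(subscriber_id: str) -> None:
    """Block the subscriber on BOTH channels at once."""
    sub = SUBSCRIBERS[subscriber_id]
    blacklist_v6.add(sub["ipv6_prefix"])
    blacklist_v4.add(sub["ipv4_shared"])

# A spam report on either channel triggers blocking on both.
blacklist_subscriber("sub-42")
print(blacklist_v6, blacklist_v4)
```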

In parallel, legal requirements on this new Internet are far more complex. To detect illegal usage, subscriber behaviour has to be analysed across the IPv4/IPv6 duality. Similarly, filtering subscribers or dual web sites becomes more complex: filtering based on allocated IP addresses versus filtering on a user identity authenticated at the application level. Gathering evidence of illegal usage can become a big problem: just imagine a dual P2P tool, where the content search is made partly over IPv4 sessions and partly over IPv6 sessions, with the traffic routed end to end without NAT.

Confronted with these new challenges, we have to rethink the security of data exchanges, communication infrastructures and end equipment (servers and desktops). In parallel, putting in place traffic, flow and behaviour analysis on a dual basis requires new tools that take into account the diversity and sophistication of Internet usage. To summarize, it is really urgent to think and act differently in the face of the evolving Internet.

Tuesday, January 25, 2011

Can Network Intelligence Technology Lower the Risk For Cyber War?

I just read the latest article about Stuxnet: The Triumph of Hacker Culture - http://www.slate.com/id/2281938

Here is a quote: “The implications are vastly unsettling. If a Stuxnet-like worm can disable Iranian nuclear manufacturing controls, there is reason to be concerned that a similar or more highly evolved worm (devised by the much-feared Chinese military cyber corps, perhaps) could seize control of our nuclear missile launch-control capacity. Maybe not yet. But the potential can't be ruled out.”

Scary…

For those of you who haven’t followed all the details about Stuxnet, the common theory is the following:

  • Israel and the US developed Stuxnet in order to delay Iran's nuclear weapons program, since this was deemed less risky than bombing raids
  • Stuxnet is seen in cyber sec / SCADA circles as the first offensive, state-sponsored, weaponized malware of a new generation
  • The fear is that Pandora's box is now open, and that adversaries will retaliate in kind

See here for a Wired article: http://www.wired.com/dangerroom/2011/01/with-stuxnet-did-the-u-s-and-israel-create-a-new-cyberwar-era/

Some people believe that China could be behind Stuxnet: http://blogs.forbes.com/firewall/2010/12/14/stuxnets-finnish-chinese-connection/

In any case, I think we will see more focus on SCADA cyber defense.

What does this mean for Network Intelligence Technology?

Even new-generation weaponized malware uses IP networks to spread and communicate. In the case of Stuxnet, "Updates to this executable would be propagated throughout the facility through a peer-to-peer method established by Stuxnet." See http://www.zdnet.com/blog/security/stuxnet-a-possible-attack-scenario/7420?tag=rbxccnbzd1
At Qosmos, we are experts at decoding traffic. If we don't recognize a protocol, it is classified as "unknown", which in itself is highly suspicious in a sensitive environment. A cyber defense solution can be configured to block all such traffic instantly; a sketch of such a default-deny policy follows.
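
A minimal sketch of that policy in Python; the classifier here is a trivial stand-in invented for the example, not the Qosmos engine or its API:

```python
KNOWN_PROTOCOLS = {"http", "dns", "smtp", "modbus"}  # hypothetical whitelist

def classify(flow_bytes: bytes) -> str:
    """Trivial stand-in for a real DPI classifier: returns 'unknown'
    when no protocol signature or grammar matches."""
    return "http" if flow_bytes.startswith(b"GET ") else "unknown"

def allow(flow_bytes: bytes) -> bool:
    """Default-deny policy: only traffic positively identified as a
    known protocol is allowed through."""
    return classify(flow_bytes) in KNOWN_PROTOCOLS

print(allow(b"GET / HTTP/1.1"))     # True: recognized protocol
print(allow(b"\x17\x03\x01..."))    # False: unknown traffic is blocked
```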

It seems that Qosmos can provide the traffic visibility required for defense against new-generation malware. It is our way of lowering the risk of cyber war.

JT

Thursday, November 18, 2010

Cyber security artists need the right tools

It has become evident to me that cyber security is an art, and like any other art, it has artists who need the right tools.
At Qosmos, we work with cyber security teams who protect very sensitive networks. These security analysts typically work in a Security Operations Center (SOC), monitoring traffic and checking for suspicious activity, such as:
  • Services or encrypted traffic on non-standard ports (see the sketch after this list)
  • Referring URIs, which can be used to detect phishing software loading partial content from a real site
  • Many (hundreds) of "IP gets" from blacklisted countries
  • Specific malware file names (e.g. shell.exe)
  • Suspicious malformed traffic
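Two of these checks, services on non-standard ports and known-suspicious file names, can be sketched in a few lines of Python (illustrative only; the port baseline and file name list are invented for the example):

```python
STANDARD_TLS_PORTS = {443, 993, 995}     # hypothetical baseline for this network
SUSPICIOUS_FILENAMES = {"shell.exe"}     # e.g. fed from threat intelligence

def check_flow(protocol: str, dst_port: int, filenames: list) -> list:
    """Return alert strings for a single flow's metadata."""
    alerts = []
    if protocol == "tls" and dst_port not in STANDARD_TLS_PORTS:
        alerts.append(f"TLS on non-standard port {dst_port}")
    for name in filenames:
        if name.lower() in SUSPICIOUS_FILENAMES:
            alerts.append(f"suspicious file name: {name}")
    return alerts

print(check_flow("tls", 8443, ["report.pdf", "shell.exe"]))
```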
Best practice cyber security filters out known threats with COTS cyber security products (AV, firewalls, etc.) and focuses investigation and analyst time on the 1% of traffic that is actually suspicious. So, what tools do the analysts need?
  • Information feeds in the form of logs and traffic metadata
  • Search and analysis capabilities
Logs are the obvious source of information when investigating potential security breaches. But a recent trend is to complement these logs with communications metadata, which represents an additional source of real-time information.

Examples of communications metadata relevant for cyber security include the protocol and application carried by a flow, the sender and receiver of an email, referring URIs, and file names seen in flows.

The advantages of metadata:
  • Not only do good metadata complement logs, they are also MORE valuable than full packet payloads for identifying patterns! As someone said to me: "sometimes, you can't see the forest (situational awareness) for the trees (packet payloads)"
  • In addition, metadata require much less storage than full packet capture, which means that historical information can be kept for much longer periods (months) than is practical with full packet capture: this means much stronger investigative capabilities.
  • Metadata also enable much faster forensic search, with the ability to search 2 TB of data in less than 2 minutes!
  • Finally, metadata can be used to index flows and packet contents
Example of a best-of-breed cyber security tool case
A tool case can be built on Qosmos + Splunk. In this setup, Qosmos does the protocol decoding up to Layer 7, providing complete visibility of all network traffic and applications, independently of ports. The extracted protocol metadata is indexed by Splunk alongside log information, and Splunk is then used for search, statistics and the GUI. A minimal sketch of the integration point follows.
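
As a sketch of that integration (illustrative; the field names are invented and this is not the actual ixEngine API), the decoded metadata can be rendered as key=value events, a format Splunk indexes and from which it extracts fields automatically:

```python
import time

def emit_event(flow_metadata: dict) -> str:
    """Render one flow's metadata as a single Splunk-friendly line."""
    fields = " ".join(f'{k}="{v}"' for k, v in flow_metadata.items())
    return f"{time.strftime('%Y-%m-%dT%H:%M:%S')} {fields}"

print(emit_event({
    "protocol": "smtp",
    "src_ip": "192.0.2.7",
    "mail_sender": "alice@example.com",
    "mail_receiver": "bob@example.com",
}))
# e.g.: 2011-05-26T10:12:03 protocol="smtp" src_ip="192.0.2.7" ...
```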

Example of Searching for Suspicious Network Activity by using Qosmos + Splunk


Let’s give cyber security artists the tools they need to exercise their art!

Jerome

Wednesday, September 8, 2010

Network Intelligence: Coming Back to Qosmos

It is interesting to see the increasing interest in embedding network intelligence software into solutions. At Qosmos, we have been speaking to network equipment suppliers, ISVs and systems integrators for years, which means that many of them have known us for years. However, during the initial discussions, many of them tell us that they have DPI skills in-house and that, while Qosmos technology is really impressive, they don't need to source it externally. "No problem," we say, "but don't hesitate to contact Qosmos if you change your mind."
We now see more and more companies coming back to Qosmos.
Why do they come back to us? The reasons are simple:
  • They find it increasingly difficult to keep up with ever-changing protocols and applications
  • They face challenges in scaling existing solutions to network speeds of a gigabit per second and beyond
  • Resource constraints force them to focus all their energy on their core business (which is typically building solutions, not enabling technology like DPI or Network Intelligence)
This is typical of new high-tech markets: initially, high-tech vendors build everything in-house, because 1) it's not too difficult and 2) there are no external suppliers. Think of databases: initially, all IT vendors built their own databases in-house (for example IBM DB2). Then vendors moved to sourcing database technology from specialists like Sybase, Informix or Oracle. The same happened with microprocessors, which were initially developed internally by computer vendors but are now sourced from specialists like Intel and AMD.
There is now a similar trend with DPI and network intelligence technology: the market is shaping up for the benefit of everyone.
Welcome back: we are happy to work with you!

JT
 