Network Enhancers - "Delivering Beyond Boundaries"

Thursday, June 9, 2016

Best Linux Command-Line Tools For Network Engineers



These Linux utilities come in handy when designing, implementing or troubleshooting a network.

Trends like open networking and adoption of the Linux operating system by network equipment vendors require network administrators and engineers to have a basic knowledge of Linux-based command-line utilities.
When I worked full-time as a network engineer, my Linux skills helped me with the tasks of design, implementation, and support of enterprise networks. I was able to efficiently collect information needed to do network design, verify routing and availability during configuration changes, and grab troubleshooting data necessary to quickly fix outages that were impacting users and business operations. Here is a list of some of the command-line utilities I recommend to network engineers.

NMAP

Nmap is the network security scanner of choice. It can give you useful information about what’s running on network hosts. It’s also so famous that it has been featured in many movies. With Nmap you can, for example, scan and identify open and filtered TCP/IP ports, check what operating system is running on a remote host, and perform a ping sweep on an IP subnet or range.
List open ports on a host
Knowing which TCP/IP ports of a host are listening for incoming connections is crucial, especially when you’re hardening a server or locking down network equipment. Nmap allows you to quickly verify that; just run the nmap command followed by the hostname, fully qualified domain name, or IP address.
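
For example, a basic scan of a single host looks like this (run it as root to also see the MAC addresses of hosts on the local network):
nmap 10.1.10.1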

In this example, we have host 10.1.10.1 with MAC address C4:04:12:BE:1A:2C and open ports 80 and 443.
Some useful options are:
-O     Enable operating system detection
-p     Port range (e.g. -p22-123)
-sP    Ping sweep of a subnet (e.g. 192.168.0.0/24) or range of hosts

Ping sweep on an IPv4 subnet
Ping sweeps are great for creating an inventory list of hosts in a network. Use this technique with caution and don’t simply scan the entire 10.0.0.0/8 subnet. Rather, go subnet per subnet (e.g. 10.1.1.0/24). I used this option many times while replacing the routers at large sites. I would create an IP inventory list before and after my configuration change to make sure that all the hosts would see the new gateways and could reach the outside world.
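
A single-subnet sweep looks like this, using the -sP option listed above (substitute your own subnet):
nmap -sP 10.1.1.0/24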

Real-time ping sweeps
Do you want a real-time ping sweep of a subnet? The following bash script will continuously execute a ping sweep to subnet 192.168.1.0/24 every five seconds. To exit the command, just hit CTRL-C.
while true; do clear; nmap -sP 192.168.1.0/24; sleep 5; done

TCPDUMP

Tcpdump is the tool that you want to use to analyze traffic sourced or destined to your own host or to capture traffic between two or more endpoints (also called sniffing). To sniff traffic, you will need to connect the host running tcpdump to a SPAN port (also called port mirroring), a hub (if you can still find one), or a network tap. This will allow you to intercept and process all captured traffic with tcpdump. Just execute the command with the -i option to select which interface to use (e.g. eth0), and the command will print all traffic captured:
tcpdump -i eth0
Tcpdump is a great utility for troubleshooting network and application issues. For example, at remote sites connected with IPsec tunnels back to the main site, I was often able to figure out why some applications would make it through the tunnel and some wouldn’t. Specifically, I noticed that applications that filled the entire IP payload and also set the DF (don't fragment) bit would fail.
The root cause was that the addition of the IPsec header, required by the VPN tunnel, made the overall packet larger than the maximum transmission unit (MTU) allowed through the tunnel. As a result, the router was discarding these oversized packets and sending back ICMP “fragmentation needed” errors. This is something I discovered while listening to the wire with tcpdump.
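If you suspect a similar MTU problem, you can watch for those ICMP errors directly. The filter below uses tcpdump’s icmptype/icmpcode fields (ICMP type 3, code 4 is “fragmentation needed and DF set”); adjust the interface name for your system:
tcpdump -ni eth0 'icmp[icmptype] == icmp-unreach and icmp[icmpcode] == 4'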
Here are some basic options that you should know about when using tcpdump:
tcpdump src 192.168.0.1
Capture all traffic from host 192.168.0.1
tcpdump dst 192.168.0.1
Capture all traffic destined to host 192.168.0.1
tcpdump icmp
Capture all ICMP traffic
tcpdump src port 80
Capture all traffic sourced from port 80
tcpdump dst port 80
Capture all traffic destined to port 80
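
These filter primitives can be combined with and, or, and not. For example, here is a sketch of a capture of one host’s HTTPS traffic saved to a file for later analysis in a tool like Wireshark (the interface, address, and file name are examples):
tcpdump -i eth0 -n -w capture.pcap 'host 192.168.0.1 and port 443'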

IPERF

Use iperf to assess the bandwidth available between two computers. You can choose between TCP or UDP traffic and set the destination port, bandwidth rate (if UDP is selected), DSCP marking, and TCP window size. The UDP iperf test can also be used to generate multicast traffic and test your PIM infrastructure.
I’ve used iperf many times to troubleshoot bandwidth issues, verify whether the ISP would honor the DSCP marking, and estimate the jitter value of VoIP traffic.
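
A minimal sketch of a UDP test, assuming classic iperf2 syntax and an example server at 192.168.0.10: start iperf -s -u on the server, then run the client:
iperf -c 192.168.0.10 -u -b 10M -i 1
The -u and -b flags select UDP at 10 Mbit/s, and -i 1 prints per-second interval reports; the report at the end of the test includes jitter and packet loss for the UDP stream.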

HPING3

Hping3 is a command-line utility very similar to ping, with the difference that it can use TCP, UDP, and raw IP as transport protocols. Hping3 allows you to not only test whether a specific TCP/IP port is open, but also measure the round-trip time. For example, if you want to test whether google.com has port 80 open and measure the round-trip time, you can type:
hping3 -S -V -p 80 google.com

Here are the options I used:

-S     Set the TCP SYN flag
-V     Enable verbose output and display more information about the replies
-p     Set the TCP/IP destination port

NETCAT

Netcat (nc) is the network engineer’s Swiss Army knife. If you want to be the MacGyver of your network, you must know the basics of netcat. If you use it in client mode, it’s similar to telnet; you can create a TCP connection to a specific port and send anything that you type. You can also use it to open a TCP/IP port and read from standard input. That makes it an easy way to transfer files between two computers. Another use case is testing whether your firewall is blocking certain traffic. For example, execute netcat in server mode on a host behind your firewall and then execute netcat in client mode from outside the firewall. If you can read on the server whatever you type on the client, then the firewall is not filtering the connection.
nc -l -p 1234
This executes netcat in server mode on port 1234 and waits for incoming connections
nc destination_host 1234
This executes netcat in client mode and connects to TCP port 1234 on remote host destination_host
You can also use netcat with pipe commands. For example, you can compress a file before sending it to the remote host with netcat:
tar cfp - /some/dir | compress -c | nc -w 3 othermachine 1234
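On the receiving host, the reverse pipeline unpacks the stream (this mirrors the classic netcat man page example; the compress/uncompress utilities may not be installed by default, and gzip/gunzip can be substituted):
nc -l -p 1234 | uncompress -c | tar xvfp -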
I hope this blog post provided some useful Linux tricks that will make your life easier. If you have other Linux command line utilities in your toolbox, please feel free to share them in the comment section below.

Wednesday, June 8, 2016

Cognitive Computing


https://en.wikipedia.org/wiki/Cognitive_computing
http://fortune.com/2016/02/25/ibm-sees-better-days-ahead/

*****

Cognitive computing is the simulation of human thought processes in a computerized model. Cognitive computing involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works. The goal of cognitive computing is to create automated IT systems that are capable of solving problems without requiring human assistance.

Cognitive computing systems use machine learning algorithms. Such systems continually acquire knowledge from the data fed into them by mining data for information. The systems refine the way they look for patterns, as well as the way they process data, so they become capable of anticipating new problems and modeling possible solutions.

Cognitive computing is used in numerous artificial intelligence (AI) applications, including expert systems, natural language processing, neural networks, robotics and virtual reality. The term cognitive computing is closely associated with IBM’s cognitive computer system, Watson.

*****

Artificial intelligence has been a distant goal of computing since the conception of the computer, but we may be getting closer than ever with new cognitive computing models.

Cognitive computing comes from a mashup of cognitive science — the study of the human brain and how it functions — and computer science, and the results will have far-reaching impacts on our private lives, healthcare, business, and more.

What is cognitive computing?

The goal of cognitive computing is to simulate human thought processes in a computerized model. Using self-learning algorithms that rely on data mining, pattern recognition and natural language processing, the computer can mimic the way the human brain works.

While computers have been faster at calculations and processing than humans for decades, they haven’t been able to accomplish tasks that humans take for granted as simple, like understanding natural language, or recognizing unique objects in an image.

Some people say that cognitive computing represents the third era of computing: we went from computers that could tabulate sums (1900s) to programmable systems (1950s), and now to cognitive systems.

These cognitive systems, most notably IBM’s Watson, rely on deep learning algorithms and neural networks to process information by comparing it to a teaching set of data. The more data the system is exposed to, the more it learns and the more accurate it becomes over time. The neural network is a complex “tree” of decisions the computer can make to arrive at an answer.

What can cognitive computing do?

For example, according to a TED Talk video from IBM, Watson could eventually be applied in a healthcare setting to help collate the span of knowledge around a condition, including patient history, journal articles, best practices, diagnostic tools, etc., analyze that vast quantity of information, and provide a recommendation.

The doctor is then able to look at evidence-based treatment options based on a large number of factors including the individual patient’s presentation and history, to hopefully make better treatment decisions.

In other words, the goal (at this point) is not to replace the doctor, but to expand the doctor’s capabilities by processing the humongous amount of data available that no human could reasonably process and retain, and to provide a summary and potential application.

This sort of process could be done for any field in which large quantities of complex data need to be processed and analyzed to solve problems, including finance, law, and education.

These systems will also be applied in other areas of business including consumer behavior analysis, personal shopping bots, customer support bots, travel agents, tutors, security, and diagnostics.  Hilton Hotels recently debuted the first concierge robot, Connie, which can answer questions about the hotel, local attractions, and restaurants posed to it in natural language.

The personal digital assistants we have on our phones and computers now (Siri and Google among others) are not true cognitive systems; they have a pre-programmed set of responses and can only respond to a preset number of requests. But the time is coming in the near future when we will be able to address our phones, our computers, our cars, or our smart houses and get a real, thoughtful response rather than a pre-programmed one.

As computers become more able to think like human beings, they will also expand our capabilities and knowledge. Just as the heroes of science fiction movies rely on their computers to make accurate predictions, gather data, and draw conclusions, so we will move into an era when computers can augment human knowledge and ingenuity in entirely new ways.
