Network Enhancers - "Delivering Beyond Boundaries"

Tuesday, October 11, 2016

Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence




#THINKNEXT - Keynote Speech - Nandan Nilekani - An Alternate View of the Future





Wednesday, October 5, 2016

Here Is A List Of Over 40 Educational Websites Where You Can Get A Free Education



According to www.webometrics.info, there are more than 17,000 universities, but getting a degree at many of them is quite costly. Many students around the world (and their families) take on heavy debt or have to work over sixty hours a week in order to afford an education.
Two-thirds of the US college seniors who graduated in 2011 had student loan debt, averaging over 27,000 USD per person. Reading those statistics, I can’t stop thinking about these words from over 30 years ago:
“With mass education, it turned out that most people could be taught to read and write. In the same way, once we have computer outlets in every home, each of them hooked up to enormous libraries, where you can ask any question and be given answers, you can look up something you’re interested in knowing, however silly it might seem to someone else.” – Isaac Asimov
Isaac Asimov died in 1992, but if he could see the opportunities the Internet gives us in the 21st century, he would probably grin from ear to ear. Getting a degree at a university might be expensive, but there are better options. Many websites now offer learning materials free of charge. Even the poorest people can now afford to be better educated than many of Harvard’s graduates; all they need is access to a computer (it does not even have to be a personal one; it could be the one the local library offers for public use).
Enough with the words, here is a BADASS list of over 40 educational websites:
1. ALISON – has delivered over 60 million lessons and records 1.2 million unique visitors per month.
2. COURSERA – Educational website that works with universities to get their courses on the Internet, free for you to use. Learn from over 542 courses.
3. The University of Reddit – The free university of Reddit.
4. UDACITY – Advance your education and career through project-based online classes, mainly focused around computer, data science and mathematics.
5. MIT Open CourseWare – Free access to quite a few MIT courses that are on par with what you’d expect from MIT.
6. Open Culture – Compendium of free learning resources, including courses, textbooks, and videos/films.
7. No Excuse List – Huge list of websites to learn from.
8. Open YALE Courses – Open Yale Courses provides free and open access to a selection of introductory courses taught by distinguished teachers and scholars at Yale University. All lectures were recorded in the Yale College classroom and are available in video, audio, and text transcript formats. Registration is not required.
9. Khan Academy – Watch thousands of micro-lectures on topics ranging from history and medicine to chemistry and computer science.


Sunday, October 2, 2016

E-SIM for consumers—a game changer in mobile telecommunications?

Courtesy - McKinsey


Traditional removable SIM cards are being replaced by dynamic embedded ones. What might this disruption mean for the industry?
Wearable gadgets, smart appliances, and a variety of data-sensor applications are often referred to collectively as the Internet of Things (IoT). Many of these devices are getting smaller with each technological iteration but will still need to perform a multitude of functions with sufficient processing capacity. They will also need to have built-in, stand-alone cellular connectivity. E-SIM technology makes this possible in the form of reprogrammable SIMs embedded in the devices. On the consumer side, e-SIMs give device owners the ability to compare networks and select service at will—directly from the device.

From industry resistance to acceptance

In 2011, Apple was granted a US patent to create a mobile-virtual-network-operator (MVNO) platform that would allow wireless networks to place bids for the right to provide their network services to Apple, which would then pass those offers on to iPhone customers.1 Three years later, in 2014, Apple released its own SIM card—the Apple SIM. Installed in iPad Air 2 and iPad Mini 3 tablets in the United Kingdom and the United States, the Apple SIM allowed customers to select a network operator dynamically, directly from the device.
This technology gave users more freedom with regard to network selection. It also changed the competitive landscape for operators. Industry players were somewhat resistant to such a high level of change, and the pushback may have been attributable to the fact that operators so heavily relied on the structure of distribution channels and contractual hardware subsidies. In fundamentally changing the way consumers use SIM cards, Apple’s new technology was sure to disrupt the model at the time.

As a technology, e-SIM’s functionality is similar to that of Apple’s MVNO and SIM, since it also presents users with all available operator profiles. Unlike Apple’s technology, however, e-SIM enables dynamic over-the-air provisioning once a network is selected. Today, the industry is reacting much more favorably. One driver of the shift in sentiment is the GSMA’s recent push to align all ecosystem participants on a standardized reference architecture for introducing e-SIMs. What’s more, machine-to-machine (M2M) applications have used this architecture for built-in SIM cards for several years now with great success.
Consumer devices will require a more dynamic pull mode to request electronic profiles than the passive push mode of M2M technology. This requirement translates into a big incentive for device manufacturers and over-the-top players to support the industry-wide adoption of e-SIM standards. Finally, it is becoming increasingly clear that future consumer wearables, watches, and gadgets should ideally be equipped with stand-alone mobile-network connectivity. Together, these developments have contributed to strong industry support from mobile operators for the GSMA’s Remote SIM Provisioning initiative.
As a result of both the strong growth in the number of M2M and IoT devices and the development of consumer e-SIM specifications by the GSMA, the distribution of e-SIMs is expected to outgrow that of traditional SIM cards over the next several years by a large margin (Exhibit 1).

The GSMA is expected to present the outcome of ongoing alignment negotiations later in 2015. The association announced that “with the majority of operators on board, the plan is to finalize the technical architecture that will be used in the development of an end-to-end remote SIM solution for consumer devices, with delivery anticipated by 2016.”2

Architecture and access

The future standard will most likely require a new or nonprovisioned device to connect to an online service (for example, an e-SIM profile-discovery server) to download an operator profile to the handset. Final details on the e-SIM operating model—including the required components for a provisioning architecture—are being finalized by OEMs, network operators, SIM vendors, and the GSMA.
While no change to the current environment is expected for most of the architecture components, the industry group needs to agree on a solution for how the online discovery service will establish the initial connection between the handset and the profile-generating units. Independent ownership is preferred from a consumer perspective to ensure that all available operator profiles (and tariffs) are made available for selection without the need to state a preference for a specific provider. Enabling over-the-air provisioning of operator profiles requires a standardized architecture with agreed-upon interfaces and protocols across all ecosystem participants.
The use of consumer e-SIMs means that the chipset manufacturers will negotiate with hardware OEMs such as Apple and Samsung directly, and the industry value chain might be reconfigured. The manufacturing and distribution of physical SIM cards becomes (partially) obsolete, although preconfiguration and profile-handling services already form a significant part of the value created for traditional SIM-card vendors. Physical SIM cards, however, are not expected to disappear from the market within the next few years. Instead, a relatively long phase of parallelism between existing SIM technology and the new standard is expected. Countless existing devices will still have to be served continuously, and developing markets, in particular, will have long usage cycles of basic, traditional SIM phones and devices.
Depending on the outcome of the GSMA’s ongoing alignment negotiations, the resulting architecture recommendation might require a slight update of the model described in Exhibit 2, but it is quite likely that the following components will be present.

Profile-generation unit. E-SIM profile generation will take place via the same processes used for SIM profile development. SIM vendors will use authentication details provided by network operators to generate unique network access keys. Rather than storing these details on physical SIM chips, they will be saved in digital form only and will await a request for download triggered by the embedded universal integrated circuit card (e-UICC) in the consumer’s handset.
Profile-delivery unit. The connection between the e-UICC in the device and the profile-generation service is established by the profile-delivery unit, which is responsible for encrypting the generated profile before it can be transmitted to the device. While theoretically, all participants in the new e-SIM ecosystem could operate the profile-delivery service, those most likely to do so will be either the SIM vendors or the mobile network operators (MNOs)—physical and virtual—themselves.
Universal-discovery (UD) server. The UD server is a new key component in the e-SIM architecture; previously, it was not required for provisioning physical SIM cards or M2M e-SIM profiles. In a consumer e-SIM environment, customers will obtain either a device that is not associated with an operator or one that has been preprovisioned. In the former case, they will be required to select a provider, and in the latter, they may have the option to do so. In both cases, the UD plays a pivotal role, as it is responsible for establishing the link between the device and the profile-provisioning units. Consumers would most likely prefer that an independent party be responsible for operator-profile discovery to ensure that all available profiles in a market (with no restrictions on tariffs and operators) are presented without commercial bias.
A possible alternative to a separate UD server might be a model similar to today’s domain-name-server (DNS) service. This would provide the same level of objectivity as the UD, but it would require more intensive communication between all involved servers to ensure that each provides comprehensive profile information.

Stakeholder advantages in an e-SIM environment

Adoption of e-SIM as the standard across consumer devices brings several advantages for most stakeholders in the industry: IoT-enabled product manufacturers (for example, connected-car or wearables manufacturers) would have the ability to build devices with “blank” SIMs that could be activated in the destination country. This functionality would make for easy equipment connectivity and allow manufacturers to offer new products in new market segments.
By adopting e-SIM technology, mobile network operators can benefit from the opportunity to take a leading role in the IoT market. They would also have the ability to provide convergent offers with multiple devices (for instance, the smart car and smart watch) under a single contract with the consumer more conveniently than they would using physical SIM cards.
Consumers benefit from the network-selection option that embedded connectivity technology provides. The ability to change providers easily means that e-SIM customers don’t have to carry multiple SIMs, have full tariff transparency, and can more easily avoid roaming charges.
Mobile-device manufacturers may be able to take control of the relationship with the customer because e-SIM, at least technically, allows for disintermediation of network operators from the end-to-end relationship. E-SIM also frees up valuable device “real estate,” which gives manufacturers an opportunity to develop even more features using the space once occupied by traditional SIM cards.
SIM vendors don’t lose in the e-SIM scenario either. Their competency in security and profile creation positions them to be valuable players in the new ecosystem. Key architecture activities, such as managing the e-SIM-generation service, are among the roles that SIM vendors are uniquely qualified to take on.

E-SIM’s potential impact on channels and operating models

Most network operators have already started initiatives to explore not only the impact of the architectural requirements on the organization—including changes to existing IT systems and processes—but also the potential effect on channels, marketing, and proposition building.
Marketing and sales. Targeting new clients through promotional activities may be as easy as having them sign up by scanning the bar code of a print advertisement and activating the service immediately—without ever meeting a shop assistant or getting a new SIM card. By conveniently adding secondary devices such as e-SIM-enabled wearables and other IoT gadgets to a consumer’s main data plan, operators might improve take-up rates for those services. On the other hand, the ease of use and ease of operator switching has the potential to weaken the network operator’s position in the mobile value chain, as customers may demand more freedom from contractual lock-ins, as well as more dynamic contractual propositions.
Customer touchpoints. The entire customer journey and in-store experience may also be affected. For example, e-SIM eliminates the need for customers to go to a store and acquire a SIM card when signing up for service. Since face-to-face, in-store interactions are opportunities to influence customer decisions, operators will need to assess the potential impact of losing this customer touchpoint and consider new ways to attract customers to their sales outlets.
Logistics. Many services will need to be redesigned, and customer-service and logistics processes will be widely affected. For example, secure communication processes for profile-PIN delivery will be required.
Churn and loyalty. The customer may be able to switch operators and offers (the prepaid client base, at least) more easily, and short-term promotions may trigger network switching. This means that churn between operators in a strong prepaid ecosystem will likely increase. But this does not necessarily mean that a customer who isn’t locked into a contract will churn networks more often or spend less. Consumers may still prioritize a deal that offers a superior user experience and acceptable call quality. Satisfied clients will likely stay with their operator as long as locked-in customers do.
Prepaid versus contract markets. E-SIM’s impact may be greater in markets with more prepaid customers than in markets with a high share of subsidized devices. While device-subsidy levels will remain an important driver of customer loyalty in developed markets, investment in device subsidization is expected to fall dramatically over the next couple of years—from approximately 20 percent of all devices sold to less than 8 percent in 2020 (Exhibit 3).

Disruptive business models enabled by e-SIM

Recently, a number of new business models have developed around e-SIMs. Specifically, dynamic brokerage and potential spot-price platform markets are piquing the interest of the mobile community.
Wholesale service provision. Wholesalers contracting with several network operators in a market could offer a tariff selection without disclosing which network is providing the connectivity. The customer could then be “auctioned” dynamically among network operators for a period of time. Electronic profiles could even be switched among operators seamlessly for the client.
Social-media and Internet-content service providers. Social-media platforms provide voice services either entirely via a data connection (over Wi-Fi, where available) or by using a temporary connection to a cellular network. Call quality depends in part on the seamless switching between those connectivity avenues, and e-SIMs would facilitate smoother “handovers” with dynamic (and automatic) operator selection.
One of the potentially highest-impact and most disruptive new ventures of this type is surely Google’s Project Fi, an MVNO offer, recently launched in the United States, that strives to provide the best available data-network performance on mobile devices by combining mobile data and Wi-Fi connectivity. The decision regarding which network to connect to will be based on the fastest available speed and bandwidth. Additionally, social-media voice services mean that mobile-phone numbers are no longer the only unique client identifiers. A user’s online communication account (for example, Hangouts) is enough to set up a phone call.
New pricing schemes. While most operators already provide mobile Internet telephony services, technically referred to as voice over IP (VoIP) or voice over LTE (VoLTE), many operator tariff schemes still have potential for disruption on the commercial side. In addition to offering competitive rates, new players may further increase margin pressure by including refunds of unused, prepaid minutes in their pricing models. For advertising-centric players or social-media companies entering the MVNO market, the advertising value or additional call-behavior data may even lead to cross-subsidizing offerings in the short term.
Global roaming services. Last but not least, other players are primarily targeting the still-expensive global data-roaming market for end users. Strong global brand power paired with the technology of reprogrammable e-SIMs—supporting over-the-air provisioning of multiple electronic user profiles of global operators—can be turned into easy-to-use offers for global travelers. These transparently priced global roaming services will allow users to choose a local network with a few clicks on the device. Current global roaming offers based on reprogrammable SIMs are priced near the upper end of the market, but providers in emerging markets may soon offer similar services and more competitive global tariff schemes.

The GSMA is working with global network operators to develop a standardized reference architecture for the implementation of e-SIM technology. The process under way may lead to widespread industry adoption of e-SIMs in the very near future.
New entrants and new sales and service models will drive e-SIM’s impact on the mobile-telecommunications market in the next two to five years. Revenue is at stake, and operators’ approaches to new propositions, shared data-tariff portfolios, potential new revenue streams, and handset-subsidy strategies across multiple markets will play a big role in how they fare in the new e-SIM ecosystem. Whether an operator decides to take a role as a smart follower or an early mover, an overall strategy and associated initiatives need to be shaped thoughtfully now.

Thursday, September 22, 2016

Comparison of Routers Cisco, Juniper and Huawei



# Cisco, Juniper and Huawei router comparison
# For CCIE candidates
# High-end routers that meet MPLS requirements
# High backplane capacity and high port capacity
# Used as core routers in MPLS environments
# Cisco ASR 9922 - capacity up to 11 Tbps
# Juniper MX2020 - capacity up to 80 Tbps
# Huawei NE5000E - capacity up to 6.4 Tbps

Below is a comparison of the various high-end models on the market, with capacities measured in Tbps.

Routers of this class are used mainly in service-provider cores, where providers deliver MPLS and Internet services to their clients.

Service providers such as AT&T, BT, Orange, NTT Communications, Verizon, Telefonica, and TATA use these routers in their core networks for their high backplane capacity.

These routers support features such as multi-chassis EtherChannel, non-stop forwarding, MPLS-TRR, non-stop routing, and various other high-end capabilities.

Let's discuss each vendor's model in detail:

Cisco is represented by the ASR 9922. The product supports a capacity of 11 Tbps, making it the highest-capacity member of the ASR series.

For this comparison, Juniper offers the MX2020, which supports carrying capacities from 34.4 up to 80 Tbps. The MX2020 supports very high densities of 10GbE, 40GbE, and 100GbE interfaces, as well as legacy SONET/SDH, ATM, and PDH connections. It also supports broadband subscriber management, along with modern time-synchronization and virtualization capabilities that meet the strict demands of mobile services.
Thanks to its industry-leading scalability and feature set, the MX2020 is well suited to edge and consolidated-edge roles as well as network and application cores.


Huawei is represented by the NE5000E, the model that in 2008 allowed Huawei to establish itself as an arch-rival in the Internet-backbone arena. The NE5000E provides multi-level protection, from the device up to the network. For protection at the device level, the NE5000E has a passive backplane, and all key components are hot-pluggable with hot-backup capability. The solution also provides stateful non-stop forwarding (NSF) and non-stop routing (NSR) at the board and process levels in all scenarios.






Wednesday, August 10, 2016

Use COM- A New Approach to Build Next-Gen Cloud Applications


What is COM? 

COM stands for Container, Open source and Microservice. In the late 1990s, the .com revolution transformed the way computing was done. Today, containers and microservices are transforming the way cloud computing is done. Microservices architecture is not just an IT buzzword anymore. Many companies have successfully adopted microservices architecture to develop highly scalable cloud applications. Cloud providers like Amazon, Microsoft, Google, etc. are building an ecosystem to facilitate application development using containers and microservices. The open-source community, unsurprisingly, played a significant role in this revolution. If you are wondering how the next-gen scalable cloud applications will be built, the COM approach is one of the most promising alternatives.

Why COM?

  • Containers: Build, ship, and run anywhere (see the sketch after this list). A Docker container can run on bare metal, a VM, AWS, Azure, Google, or DigitalOcean. Docker 1.12 provides native support for Mac and Windows too.
  • Open source: It's open source, duh!
  • Microservices: Scale, flexibility, system resilience. Monoliths are bad for health!
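
The “build, ship, run” workflow mentioned in the first bullet comes down to three commands. A minimal sketch (the image name myrepo/myservice and port 8080 are hypothetical, not from this post):
docker build -t myrepo/myservice:1.0 .
docker push myrepo/myservice:1.0
docker run -d -p 8080:8080 myrepo/myservice:1.0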

COM Ecosystem

Building an individual microservice might be easy. The challenge is to integrate all the services into a complete solution. Therefore, the following aspects need to be considered when building COM-based applications.

Containerizing a service: While packaging a service in a container, we must remember that a container is NOT a VM. Best practices for building containers include one process per container and ephemeral behavior (a container can be stopped and a new one put in its place with an absolute minimum of setup and configuration).
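
In practice, ephemeral behavior means an instance is destroyed and replaced rather than repaired in place (the container name billing and the image are hypothetical, as in the earlier sketch):
docker rm -f billing
docker run -d --name billing myrepo/myservice:1.0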

Service discovery: A COM-based cloud application can have hundreds or thousands of containers coming up and going down continuously. Hardcoded IP addresses or traditional DNS-based service discovery will not be useful in such scenarios. We need dynamic and scalable service-discovery solutions; open-source tools like ZooKeeper and Consul are suitable for such use cases.
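
For instance, with Consul each service instance registers itself with a local agent, and consumers query for healthy instances. A sketch against Consul's standard HTTP API (the service name and port are hypothetical):
curl -X PUT -d '{"Name": "billing", "Port": 8080}' http://localhost:8500/v1/agent/service/register
curl http://localhost:8500/v1/health/service/billing?passing
The first call registers a service instance with the local Consul agent (default API port 8500); the second looks up the healthy instances of that service.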

Service-to-service communication: For synchronous communication, REST/JSON has been the de-facto standard for the past several years. However, other promising alternatives such as gRPC/protobuf and Thrift have emerged. These RPC frameworks take advantage of the new HTTP/2 (introduced in 2015), binary protocols, and bidirectional streaming. For asynchronous communication, protocols such as AMQP and STOMP have been widely used in the industry.

Monitoring and logging: With hundreds of containers running in the system, we need specialized solutions such as Fluentd, Datadog, and cAdvisor for system monitoring and logging.
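
As an example, cAdvisor itself runs as a container and exposes per-container resource metrics through a web UI on port 8080. This follows cAdvisor's documented quick-start invocation (exact mount points may vary by host):
docker run -d --name=cadvisor -p 8080:8080 \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:rw \
  -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:latest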

Automation and CI/CD: Building and deploying microservices without proper automation and CI/CD can prove to be the worst nightmare for an engineering team. Apart from traditional deployment techniques, the engineering/DevOps team needs to consider container-orchestration and container-networking mechanisms.

The following mind map gives an overview of the COM ecosystem. It is definitely not an exhaustive list, but it could be a good starting point for exploring the various available options.





Note: The acronym COM used here is meant only for the purpose of explanation. It is not a standard acronym.

Thursday, June 9, 2016

Best Linux Command-Line Tools For Network Engineers



These Linux utilities come in handy when designing, implementing or troubleshooting a network.

Trends like open networking and adoption of the Linux operating system by network equipment vendors require network administrators and engineers to have a basic knowledge of Linux-based command-line utilities.
When I worked full-time as a network engineer, my Linux skills helped me with the tasks of design, implementation, and support of enterprise networks. I was able to efficiently collect information needed to do network design, verify routing and availability during configuration changes, and grab troubleshooting data necessary to quickly fix outages that were impacting users and business operations. Here is a list of some of the command-line utilities I recommend to network engineers.

NMAP

Nmap is the network security scanner of choice. It can give you useful information about what’s running on network hosts. It’s also so famous that it has been featured in many movies. With Nmap you can, for example, scan and identify open and filtered TCP/IP ports, check what operating system is running on a remote host, and perform a ping sweep on an IP subnet or range.
List open ports on a host
Knowing which TCP/IP ports of a host are listening for incoming connections is crucial, especially when you’re hardening a server or locking down network equipment. Nmap allows you to quickly verify that; just run the Nmap command followed by the hostname or fully qualified domain name.
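For example, a plain default scan (which covers the 1,000 most common ports) against the host discussed below:
nmap 10.1.10.1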

In this example, we have host 10.1.10.1 with MAC address C4:04:12:BE:1A:2C and open ports 80 and 443.
Some useful options are:
-O                    Enable operating system detection
-p                     Port range (e.g. -p22-123)
-sP                   Ping sweep of a subnet (e.g. 192.168.0.0/24) or range of hosts

Ping sweep on an IPv4 subnet
Ping sweeps are great for creating an inventory list of hosts in a network. Use this technique with caution and don’t simply scan the entire 10.0.0.0/8 subnet. Rather, go subnet per subnet (e.g. 10.1.1.0/24). I used this option many times while replacing the routers at large sites. I would create an IP inventory list before and after my configuration change to make sure that all the hosts would see the new gateways and could reach the outside world.
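A sweep of a single /24 with the -sP option looks like this:
nmap -sP 10.1.1.0/24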

Real-time ping sweeps
Do you want a real-time ping sweep of a subnet? The following bash script will continuously execute a ping sweep to subnet 192.168.1.0/24 every five seconds. To exit the command, just hit CTRL-C.
while true; do clear; nmap -sP 192.168.1.0/24; sleep 5; done

TCPDUMP

Tcpdump is the tool that you want to use to analyze traffic sourced or destined to your own host or to capture traffic between two or more endpoints (also called sniffing). To sniff traffic, you will need to connect the host running tcpdump to a SPAN port (also called port mirroring), a hub (if you can still find one), or a network tap. This will allow you to intercept and process all captured traffic with tcpdump. Just execute the command with the -i option to select what interface to use (eth0), and the command will print all traffic captured:
tcpdump -i eth0
Tcpdump is a great utility for troubleshooting network and application issues. For example, at remote sites connected with IPsec tunnels back to the main site, I was often able to figure out why some applications would make it through the tunnel and some wouldn’t. Specifically, I noticed that applications using the entire IP payload and also setting the DF (don't fragment) bit would fail.
The root cause was that the addition of the IPsec header, required by the VPN tunnel, would make the overall packet larger than the maximum transmission unit (MTU) allowed through the tunnel. As a result, the router was discarding these oversized packets and sending back ICMP packets with the “Can't Fragment Error” code. This is something I discovered while listening to the wire with tcpdump.
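To spot those errors on the wire, you can filter for ICMP destination-unreachable messages with code 4, “fragmentation needed and DF set” (a sketch; adjust the interface name to your system):
tcpdump -i eth0 'icmp[icmptype] == 3 and icmp[icmpcode] == 4'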
Here are some basic options that you should know about when using tcpdump:
tcpdump src 192.168.0.1
Capture all traffic from host 192.168.0.1
tcpdump dst 192.168.0.1
Capture all traffic destined to host 192.168.0.1
tcpdump icmp
Capture all ICMP traffic
tcpdump src port 80
Capture all traffic sourced from port 80
tcpdump dst port 80
Capture all traffic destined to port 80

IPERF

Use iperf  to assess the bandwidth available between two computers. You can choose between TCP or UDP traffic and set the destination port, bandwidth rate (if UDP is selected), DSCP marking, and TCP window size. The UDP iperf test can also be used to generate multicast traffic and test your PIM infrastructure.
I’ve used iperf many times to troubleshoot bandwidth issues, verify whether the ISP would honor the DSCP marking, and estimate the jitter value of VoIP traffic.
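A minimal sketch of such a test (the server address, rate, and marking here are illustrative; TOS 0xB8 corresponds to DSCP EF, the usual VoIP marking):
iperf -s
This runs iperf in server mode on the receiving host.
iperf -c 192.168.0.10 -u -b 10M -S 0xB8
This runs a 10 Mbit/s UDP test from the sending host toward 192.168.0.10 with the EF marking applied.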

HPING3

Hping3 is a utility command very similar to ping, with the difference that it can use TCP, UDP, and RAW-IP as transport protocols. Hping3 allows you to not only test whether a specific TCP/IP port is open, but also measure the round-trip time. For example, if you want to test whether google.com has port 80 open and measure the round-trip time, you can type:
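hping3 -S -V -p 80 google.com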

Here are the options I used:

-S                    Set the SYN tcp flag
-V                    Enable verbose output and display more information about the replies
-p                     Set the TCP/IP destination port

NETCAT

Netcat (nc) is the network engineer’s Swiss Army knife. If you want to be the MacGyver of your network, you must know the basics of netcat. If you use it in client mode, it’s similar to telnet; you can create a TCP connection to a specific port and send anything that you type. You can also use it to open a TCP/IP port and read from standard input. That makes it an easy way to transfer files between two computers. Another use case is testing whether your firewall is blocking certain traffic. For example, execute netcat in server mode on a host behind your firewall and then execute netcat in client mode from outside the firewall. If you can read on the server whatever you type on the client, then the firewall is not filtering the connection.
nc -l -p 1234
This executes netcat in server mode on port 1234 and waits for incoming connections.
nc destination_host 1234
This executes netcat in client mode and connects to TCP port 1234 on the remote host destination_host.
You can also use netcat with pipe commands. For example, you can compress a file before sending it to the remote host with netcat:
tar cfp - /some/dir | compress -c | nc -w 3 othermachine 1234
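On the receiving host, the matching pipeline accepts the connection and unpacks the stream:
nc -l -p 1234 | uncompress -c | tar xvfp -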
I hope this blog post provided some useful Linux tricks that will make your life easier. If you have other Linux command line utilities in your toolbox, please feel free to share them in the comment section below.

Wednesday, June 8, 2016

Cognitive Computing


https://en.wikipedia.org/wiki/Cognitive_computing
http://fortune.com/2016/02/25/ibm-sees-better-days-ahead/

*****

Cognitive computing is the simulation of human thought processes in a computerized model. Cognitive computing involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works. The goal of cognitive computing is to create automated IT systems that are capable of solving problems without requiring human assistance.

Cognitive computing systems use machine learning algorithms. Such systems continually acquire knowledge from the data fed into them by mining the data for information. The systems refine the way they look for patterns, as well as the way they process data, so they become capable of anticipating new problems and modeling possible solutions.

Cognitive computing is used in numerous artificial intelligence (AI) applications, including expert systems, natural language programming, neural networks, robotics and virtual reality. The term cognitive computing is closely associated with IBM’s cognitive computer system, Watson.

*****

Artificial intelligence has been a far-flung goal of computing since the conception of the computer, but we may be getting closer than ever with new cognitive computing models.

Cognitive computing comes from a mashup of cognitive science — the study of the human brain and how it functions — and computer science, and the results will have far-reaching impacts on our private lives, healthcare, business, and more.

What is cognitive computing?

The goal of cognitive computing is to simulate human thought processes in a computerized model. Using self-learning algorithms that use data mining, pattern recognition and natural language processing, the computer can mimic the way the human brain works.

While computers have been faster at calculations and processing than humans for decades, they haven’t been able to accomplish tasks that humans take for granted as simple, like understanding natural language, or recognizing unique objects in an image.

Some people say that cognitive computing represents the third era of computing: we went from computers that could tabulate sums (1900s) to programmable systems (1950s), and now to cognitive systems.

These cognitive systems, most notably IBM’s Watson, rely on deep learning algorithms and neural networks to process information by comparing it to a teaching set of data. The more data the system is exposed to, the more it learns and the more accurate it becomes over time; the neural network is a complex “tree” of decisions the computer can make to arrive at an answer.

What can cognitive computing do?

For example, according to this TED Talk video from IBM, Watson could eventually be applied in a healthcare setting to help collate the span of knowledge around a condition, including patient history, journal articles, best practices, diagnostic tools, etc., analyze that vast quantity of information, and provide a recommendation.

The doctor is then able to look at evidence-based treatment options based on a large number of factors including the individual patient’s presentation and history, to hopefully make better treatment decisions.

In other words, the goal (at this point) is not to replace the doctor, but expand the doctor’s capabilities by processing the humongous amount of data available that no human could reasonably process and retain, and provide a summary and potential application.

This sort of process could be done for any field in which large quantities of complex data need to be processed and analyzed to solve problems, including finance, law, and education.

These systems will also be applied in other areas of business including consumer behavior analysis, personal shopping bots, customer support bots, travel agents, tutors, security, and diagnostics.  Hilton Hotels recently debuted the first concierge robot, Connie, which can answer questions about the hotel, local attractions, and restaurants posed to it in natural language.

The personal digital assistants we have on our phones and computers now (Siri and Google among others) are not true cognitive systems; they have a pre-programmed set of responses and can only respond to a preset number of requests.  But the time is coming in the near future when we will be able to address our phones, our computers, our cars, or our smart houses and get a real, thoughtful response rather than a pre-programmed one.

As computers become more able to think like human beings, they will also expand our capabilities and knowledge. Just as the heroes of science fiction movies rely on their computers to make accurate predictions, gather data, and draw conclusions, so we will move into an era when computers can augment human knowledge and ingenuity in entirely new ways.

Wednesday, January 13, 2016

Huawei Threatens Cisco With Application Driven Networking


Huawei is pointing a gun straight at Cisco with a new strategy it's calling Application Driven Networking (ADN), designed to give comms networks the flexibility required for New IP applications.

Huawei Technologies Co. Ltd. announced its Application Driven Networking (ADN) without a lot of fanfare early last month. It sounds at first like a "me too" strategy that copies Cisco's similarly named Application Centric Infrastructure (ACI). (See Huawei Unveils Application-Driven Network Vision.)

But ADN is more than just a marketing campaign, as Ayush Sharma, senior VP and IP CTO for Huawei, explains it. It's an architecture that uses 5G principles to build networks with the flexibility needed for the different requirements of traditional voice, Internet and machine-to-machine communications.

While Cisco is building a virtual network overlay on top of MPLS and other traditional network components, Huawei is using open source, SDN, NFV, and the 5G principle of network slicing to build programmable, network-agnostic networks on top of any media, including copper, fiber, and wireless (including LTE), Sharma tells Light Reading.

Carriers need to go beyond the old, simplistic definition of five-nines of reliability to meet the different demands of different kinds of applications, Sharma says.

For example, traditional voice networks follow the "Poisson distribution model," Huawei Fellow Wen Tong said in a statement. "Poisson Distribution" is a statistical model -- what it means for comms companies is that phone calls are likely to remain at a constant frequency and duration over time. This model is suitable for a hierarchical network architecture.

Internet communications follow a power-law distribution model, where most users are connected to central nodes.

And machine-to-machine communications comprises many different cases with greatly varying needs. For example, communications between connected cars requires extremely low latency. "In telemedicine, remote video systems require ultra-wide bandwidth, low latency and high reliability," Huawei says. "In this case, networks must create small systems locally and huge systems globally. The Markov distribution model supports network architecture with both distributed and centralized controls." (See How IoT Forked the Mobile Roadmap.)

These principles -- flexibility, applications driving technology and use of open components -- are hallmarks of New IP networks.

Huawei achieves the network flexibility required for the new generation of apps by dividing the network logically into the data plane, control plane and the services plane, where third-party applications such as the connected car, robotic surgery and drone control systems reside.

On the data plane (also known as the forwarding plane) the network uses programming languages such as Huawei's own Protocol Oblivious Forwarding, which is compatible with the industry standard P4, as well as OpenFlow, to forward traffic.

On the control plane, Huawei uses open protocols from ON.Lab's ONOS and OpenDaylight to manage devices.

On the services plane, applications don't need to know anything about the underlying network complexity.


"Earlier, these kinds of things were vertically integrated into a box," says Sharma. A router contained forwarding, control and some network applications. Now the forwarding and control plane are separate devices built using standardized components and the applications reside anywhere on the network.

An example of Huawei's ADN in action is AT&T Inc. (NYSE: T)'s Central Office Re-architected as Data Center (CORD), a scalable, white-box architecture designed to deliver services more economically than the current central office set-up, incorporating multiple single-function boxes. Huawei is partnering with AT&T on CORD. (See AT&T to Show Off Next-Gen Central Office and Ciena Offers Hardened ONOS for Next-Gen Central Office Conversions.)

For another example, a connected car needs to be able to receive software upgrades and bug fixes, and to send reports from its sensors. "They don't care whether it's over a fixed connection, WiFi, SDN or NFV. They need to be connecting, need a certain QoS, they don't care about implementation," Sharma says.


Sunday, January 10, 2016

3 Useful Wireless Technologies You Should Know About



Most of you are intimately familiar with the popular short-range wireless technologies such as Wi-Fi, Bluetooth, ZigBee, 802.15.4, and maybe even Z-Wave. All of these are addressing the Internet-of-Things (IoT) movement. But there are other, lesser-known technologies worth considering. I have identified three that justify a closer look, especially if longer range is a requirement: These are LoRa, Sigfox, and Weightless.

All of these are relatively new and solve the range problem for some IoT or Machine-to-Machine (M2M) applications. These technologies operate in the unlicensed sub-1 GHz spectrum. Lower frequencies always give longer range for a given power level, thanks to the physics of radio waves. Where most shorter-range technologies fizzle out beyond about 10 meters, these newer technologies are good up to several miles or so.

Up until now, cellular connections have been used to implement M2M or IoT monitoring and control applications with a range greater than several hundred meters. Most cellular operators offer an M2M service, and multiple cell phone module makers can provide the hardware. But because of the technical complexity and high cost, cellular is not the best solution for a simple monitoring or control application.
These newer technologies offer the range needed at lower cost and significantly lower power consumption. This new category is known as low-power wide-area networks (LPWANs). These are beginning to emerge as a significant competitor to cellular in the IoT/M2M space. A recent investigation by Beecham Research predicts that by 2020 as much as 26% of IoT/M2M coverage will be provided by LPWANs.

LoRa

LoRa stands for long-range radio. This technology is a product of Semtech. Typical operating frequencies are 915 MHz for the U.S., 868 MHz for Europe, and 433 MHz for Asia. The LoRa physical layer (PHY) uses a unique form of FM chirp spread spectrum, along with forward error correction (FEC), allowing it to demodulate signals 20 to 30 dB below the noise level. This gives it a huge link budget with a 20- to 30-dB advantage over a typical FSK system.
The spread spectrum modulation permits multiple radios to use the same band if each radio uses a different chirp and data rate. Data rates range from 0.03kb/s to 37.5 kb/s. Transmitter power level is 20 dBm. Typical range is 2 to 5 km, and up to 15 km is possible depending upon the location and antenna characteristics.
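To see where that link budget comes from, here is a back-of-the-envelope receiver-sensitivity calculation (a sketch with assumed values: LoRa's common 125-kHz channel bandwidth, a 6-dB noise figure, and the roughly -20-dB demodulation SNR of the slowest LoRa rate; these figures are not from this article):

S = -174 dBm/Hz + 10*log10(BW) + NF + SNRmin
  = -174 + 10*log10(125,000) + 6 - 20
  ≈ -174 + 51 + 6 - 20 = -137 dBm

A typical FSK receiver needs an SNR of roughly +8 to +10 dB instead of -20 dB, which is where the 20- to 30-dB link-budget advantage comes from.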
The media-access-control (MAC) layer is called LoRaWAN. It is IPv6 compatible. The basic topology is a star where multiple end points communicate with a single gateway, which provides the backhaul to the Internet. Maximum payload in a packet is 256 bytes. A CRC is used for error detection. Several levels of security (EUI64 and EUI128) are used to provide security. Low power consumption is a key feature.

Sigfox

Sigfox is a French company offering its wireless technology, as well as a local LPWAN, for longer-range IoT or M2M applications. It operates in the 902-MHz ISM band but consumes very little bandwidth or power. Sigfox radios use a controversial technique called ultranarrowband (UNB) modulation. UNB is a variation of BPSK and supposedly produces no sidebands if zero or negative group-delay filters are used in the implementation. It uses only low data rates to transmit short messages occasionally. For example, Sigfox has a maximum payload, separate from the node address, of 12 bytes, and a node can send no more than 140 messages per day. This makes Sigfox ideal for simple applications such as energy metering, alarms, or other basic sensor applications. Sigfox can set up a local LPWAN, then charge a low rate for a service subscription.
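Those limits are worth multiplying out: 140 messages per day at 12 bytes each comes to at most 1,680 bytes of uplink payload per day, which is ample for a daily meter reading or an alarm but rules out anything chatty.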

Weightless

Weightless is an open LPWAN standard and technology for IoT and M2M applications. It is sponsored by the Weightless SIG and is available in several versions. The original version, Weightless-W, was designed to use the TV white spaces, or unused TV channels, from 54 to 698 MHz. Channels are 6 MHz wide in the U.S. and 8 MHz wide in Europe. These channels are ideal for supporting long-range and non-line-of-sight transmission. The standard employs cognitive radio technology to ensure no interference with local TV signals: the base station queries a database to see which channels are available locally for data transmission. Modulation ranges from simple differential BPSK up to 16QAM, with frequency-hopping spread spectrum, supporting data rates from about 1 kb/s to 16 Mb/s. Duplexing is time-division (TDD). Typical maximum range is about 5 to 10 km.
Weightless-N is a simpler version using DBPSK over very narrow bands to support lower data rates. Weightless-P is a newer, more robust version using either GMSK or offset-QPSK modulation. Data rates can be up to 100 kb/s using 12.5-kHz channels. Both the N and P versions work in the standard sub-1 GHz ISM bands. All versions incorporate encryption and authentication for security.

Other LPWAN Options

The three technologies listed above will probably dominate LPWAN applications, but there are a few other choices. One is a variation of Wi-Fi designated by the IEEE as standard 802.11af. It was designed to operate in the TV white spaces with 6- or 8-MHz channels. The modulation is OFDM using BPSK, QPSK, 16QAM, 64QAM, or 256QAM. The maximum data rate per 6-MHz channel is about 24 Mb/s, and up to four channels can be bonded to get higher rates. A database query is part of the protocol to identify usable local channels.
A forthcoming variation is 802.11ah, a non-white-space, sub-1 GHz version of Wi-Fi for the ISM bands. It is simpler, addresses the low-power-consumption issue, and is intended for low-speed applications. It is expected to be available in 2016.
Another IEEE standard identified for possible LPWAN use is 802.22. This is another OFDM standard that has been around for years. Since 802.11af/ah and 802.22 use OFDM, their power consumption makes them less desirable for low-power applications. Their main advantage may be the higher data rates possible.


