Network Enhancers - "Delivering Beyond Boundaries"


Sunday, October 2, 2016

E-SIM for consumers—a game changer in mobile telecommunications?

Courtesy - McKinsey


Traditional removable SIM cards are being replaced by dynamic embedded ones. What might this disruption mean for the industry?
Wearable gadgets, smart appliances, and a variety of data-sensor applications are often referred to collectively as the Internet of Things (IoT). Many of these devices are getting smaller with each technological iteration but will still need to perform a multitude of functions with sufficient processing capacity. They will also need to have built-in, stand-alone cellular connectivity. E-SIM technology makes this possible in the form of reprogrammable SIMs embedded in the devices. On the consumer side, e-SIMs give device owners the ability to compare networks and select service at will—directly from the device.

From industry resistance to acceptance

In 2011, Apple was granted a US patent to create a mobile-virtual-network-operator (MVNO) platform that would allow wireless networks to place bids for the right to provide their network services to Apple, which would then pass those offers on to iPhone customers. Three years later, in 2014, Apple released its own SIM card—the Apple SIM. Installed in iPad Air 2 and iPad Mini 3 tablets in the United Kingdom and the United States, the Apple SIM allowed customers to select a network operator dynamically, directly from the device.
This technology gave users more freedom with regard to network selection. It also changed the competitive landscape for operators. Industry players were somewhat resistant to such a high level of change, and the pushback may have been attributable to the fact that operators so heavily relied on the structure of distribution channels and contractual hardware subsidies. In fundamentally changing the way consumers use SIM cards, Apple’s new technology was sure to disrupt the model at the time.

As a technology, e-SIM’s functionality is similar to that of Apple’s MVNO and SIM, since it also presents users with all available operator profiles. Unlike Apple’s technology, however, e-SIM enables dynamic over-the-air provisioning once a network is selected. Today, the industry is reacting much more favorably. One driver of the shift in sentiment is the GSMA’s recent push to align all ecosystem participants on a standardized reference architecture for introducing e-SIMs. What’s more, machine-to-machine (M2M) applications have used this architecture for built-in SIM cards for several years now with great success.
Consumer devices will require a more dynamic pull mode to request electronic profiles than the passive push mode of M2M technology. This requirement translates into a big incentive for device manufacturers and over-the-top players to support the industry-wide adoption of e-SIM standards. Finally, it is becoming increasingly clear that future consumer wearables, watches, and gadgets should ideally be equipped with stand-alone mobile-network connectivity. Together, these developments have contributed to strong industry support from mobile operators for the GSMA’s Remote SIM Provisioning initiative.
As a result of both the strong growth in the number of M2M and IoT devices and the development of consumer e-SIM specifications by the GSMA, the distribution of e-SIMs is expected to outgrow that of traditional SIM cards over the next several years by a large margin (Exhibit 1).

The GSMA is expected to present the outcome of ongoing alignment negotiations later in 2015. The association announced that “with the majority of operators on board, the plan is to finalize the technical architecture that will be used in the development of an end-to-end remote SIM solution for consumer devices, with delivery anticipated by 2016.”

Architecture and access

The future standard will most likely require a new or nonprovisioned device to connect to an online service (for example, an e-SIM profile-discovery server) to download an operator profile to the handset. Final details on the e-SIM operating model—including the required components for a provisioning architecture—are being finalized by OEMs, network operators, SIM vendors, and the GSMA.
While no change to the current environment is expected for most of the architecture components, the industry group needs to agree on a solution for how the online discovery service will establish the initial connection between the handset and the profile-generating units. Independent ownership is preferred from a consumer perspective to ensure that all available operator profiles (and tariffs) are made available for selection without the need to state a preference for a specific provider. Enabling over-the-air provisioning of operator profiles requires a standardized architecture with agreed-upon interfaces and protocols across all ecosystem participants.
The use of consumer e-SIMs means that the chipset manufacturers will negotiate with hardware OEMs such as Apple and Samsung directly, and the industry value chain might be reconfigured. The manufacturing and distribution of physical SIM cards becomes (partially) obsolete, although preconfiguration and profile-handling services already form a significant part of the value created for traditional SIM-card vendors. Physical SIM cards, however, are not expected to disappear from the market within the next few years. Instead, a relatively long phase of parallelism between existing SIM technology and the new standard is expected. Countless existing devices will still have to be served continuously, and developing markets, in particular, will have long usage cycles of basic, traditional SIM phones and devices.
Depending on the outcome of the GSMA’s ongoing alignment negotiations, the resulting architecture recommendation might require a slight update of the model described in Exhibit 2, but it is quite likely that the following components will be present.

Profile-generation unit. E-SIM profile generation will take place via the same processes used for SIM profile development. SIM vendors will use authentication details provided by network operators to generate unique network access keys. Rather than storing these details on physical SIM chips, they will be saved in digital form only and will await a request for download triggered by the embedded universal integrated circuit card (e-UICC) in the consumer’s handset.
Profile-delivery unit. The connection between the e-UICC in the device and the profile-generation service is established by the profile-delivery unit, which is responsible for encrypting the generated profile before it can be transmitted to the device. While theoretically, all participants in the new e-SIM ecosystem could operate the profile-delivery service, those most likely to do so will be either the SIM vendors or the mobile network operators (MNOs)—physical and virtual—themselves.
Universal-discovery (UD) server. The UD server is a new key component in the e-SIM architecture; previously, it was not required for provisioning physical SIM cards or M2M e-SIM profiles. In a consumer e-SIM environment, customers will obtain either a device that is not associated with an operator or one that has been preprovisioned. In the former case, they will be required to select a provider, and in the latter, they may have the option to do so. In both cases, the UD plays a pivotal role, as it is responsible for establishing the link between the device and the profile-provisioning units. Consumers would most likely prefer that an independent party be responsible for operator-profile discovery to ensure that all available profiles in a market (with no restrictions on tariffs and operators) are presented without commercial bias.
A possible alternative to a separate UD server might be a model similar to today’s domain-name-server (DNS) service. This would provide the same level of objectivity as the UD, but it would require more intensive communication between all involved servers to ensure that each provides comprehensive profile information.
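The interaction among these three components can be sketched in code. The sketch below is purely illustrative: every class and method name is invented for this example and does not correspond to the GSMA's actual interfaces, which are still being finalized.

```python
# Illustrative model of the e-SIM provisioning flow described above.
# All names are hypothetical; real Remote SIM Provisioning interfaces differ.

class ProfileGenerationUnit:
    """Generates operator profiles from network-access keys (a SIM-vendor role)."""
    def __init__(self):
        self.profiles = {}  # operator name -> profile data

    def register_operator(self, operator, access_key):
        self.profiles[operator] = {"operator": operator, "key": access_key}

    def generate(self, operator):
        return dict(self.profiles[operator])

class ProfileDeliveryUnit:
    """Encrypts a generated profile before over-the-air transmission."""
    def __init__(self, generator):
        self.generator = generator

    def deliver(self, operator):
        profile = self.generator.generate(operator)
        profile["encrypted"] = True  # stand-in for real profile encryption
        return profile

class UniversalDiscoveryServer:
    """Lists all available operator profiles without commercial bias."""
    def __init__(self, delivery_units):
        self.delivery_units = delivery_units  # operator -> delivery unit

    def available_operators(self):
        return sorted(self.delivery_units)

    def provision(self, euicc, operator):
        euicc.install(self.delivery_units[operator].deliver(operator))

class EUICC:
    """Embedded UICC in the handset; pulls and installs a selected profile."""
    def __init__(self):
        self.installed = None

    def install(self, profile):
        self.installed = profile

# A blank device discovers all operators, the user picks one, and the
# profile is provisioned over the air.
gen = ProfileGenerationUnit()
gen.register_operator("OperatorA", "key-a")
gen.register_operator("OperatorB", "key-b")
delivery = ProfileDeliveryUnit(gen)
ud = UniversalDiscoveryServer({"OperatorA": delivery, "OperatorB": delivery})

device = EUICC()
print(ud.available_operators())      # both operators offered, unbiased
ud.provision(device, "OperatorB")
print(device.installed["operator"])  # OperatorB
```

Note how the dynamic pull mode described earlier shows up here: provisioning is triggered on behalf of the device rather than pushed by the network, which is the key difference from the M2M model.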

Stakeholder advantages in an e-SIM environment

Adoption of e-SIM as the standard across consumer devices brings several advantages for most stakeholders in the industry: IoT-enabled product manufacturers (for example, connected-car or wearables manufacturers) would have the ability to build devices with “blank” SIMs that could be activated in the destination country. This functionality would make for easy equipment connectivity and allow manufacturers to offer new products in new market segments.
By adopting e-SIM technology, mobile network operators can benefit from the opportunity to take a leading role in the IoT market. They would also have the ability to provide convergent offers with multiple devices (for instance, the smart car and smart watch) under a single contract with the consumer more conveniently than they would using physical SIM cards.
Consumers benefit from the network-selection option that embedded connectivity technology provides. Because they can change providers easily, e-SIM customers don’t have to carry multiple SIMs; they also gain full tariff transparency and can more easily avoid roaming charges.
Mobile-device manufacturers may be able to take control of the relationship with the customer because e-SIM, at least technically, allows for disintermediation of network operators from the end-to-end relationship. E-SIM also frees up valuable device “real estate,” which gives manufacturers an opportunity to develop even more features using the space once occupied by traditional SIM cards.
SIM vendors don’t lose in the e-SIM scenario either. Their competency in security and profile creation positions them to be valuable players in the new ecosystem. Key architecture activities, such as managing the e-SIM-generation service, are among the roles that SIM vendors are uniquely qualified to take on.

E-SIM’s potential impact on channels and operating models

Most network operators have already started initiatives to explore not only the impact of the architectural requirements on the organization—including changes to existing IT systems and processes—but also the potential effect on channels, marketing, and proposition building.
Marketing and sales. Targeting new clients through promotional activities may be as easy as having them sign up by scanning the bar code of a print advertisement and activating the service immediately—without ever meeting a shop assistant or getting a new SIM card. By conveniently adding secondary devices such as e-SIM-enabled wearables and other IoT gadgets to a consumer’s main data plan, operators might improve take-up rates for those services. On the other hand, the ease of use and ease of operator switching has the potential to weaken the network operator’s position in the mobile value chain, as customers may demand more freedom from contractual lock-ins, as well as more dynamic contractual propositions.
Customer touchpoints. The entire customer journey and in-store experience may also be affected. For example, e-SIM eliminates the need for customers to go to a store and acquire a SIM card when signing up for service. Since face-to-face, in-store interactions are opportunities to influence customer decisions, operators will need to assess the potential impact of losing this customer touchpoint and consider new ways to attract customers to their sales outlets.
Logistics. Many services will need to be redesigned, and customer-service and logistics processes will be widely affected. For example, secure communication processes for profile-PIN delivery will be required.
Churn and loyalty. The customer may be able to switch operators and offers (the prepaid client base, at least) more easily, and short-term promotions may trigger network switching. This means that churn between operators in a strong prepaid ecosystem will likely increase. But this does not necessarily mean that a customer who isn’t locked into a contract will churn networks more often or spend less. Consumers may still prioritize a deal that offers a superior user experience and acceptable call quality. Satisfied clients will likely stay with their operator as long as locked-in customers do.
Prepaid versus contract markets. E-SIM’s impact may be greater in markets with more prepaid customers than in markets with a high share of subsidized devices. While device-subsidy levels will remain an important driver of customer loyalty in developed markets, investment in device subsidization is expected to fall dramatically over the next couple of years—from approximately 20 percent of all devices sold to less than 8 percent in 2020 (Exhibit 3).

Disruptive business models enabled by e-SIM

Recently, a number of new business models have developed around e-SIMs. Specifically, dynamic brokerage and potential spot-price platform markets are piquing the interest of the mobile community.
Wholesale service provision. Wholesalers contracting with several network operators in a market could offer a tariff selection without disclosing which network is providing the connectivity. The customer could then be “auctioned” dynamically among network operators for a period of time. Electronic profiles could even be switched among operators seamlessly for the client.
Social-media and Internet-content service providers. The voice services that social-media platforms offer rely on available Wi-Fi connectivity to provide voice services either entirely via a data connection or by using a temporary connection to a cellular network. Call quality depends in part on the seamless switching between those connectivity avenues, and e-SIMs would facilitate smoother “handovers” with dynamic (and automatic) operator selection.
One of the potentially highest-impact and most disruptive new ventures of this type is surely Google’s Project Fi, an MVNO offer, recently launched in the United States, that strives to provide the best available data-network performance on mobile devices by combining mobile data and Wi-Fi connectivity. The decision regarding which network to connect to will be based on the fastest available speed and bandwidth. Additionally, social-media voice services mean that mobile-phone numbers are no longer the only unique client identifiers. A user’s online communication account (for example, Hangouts) is enough to set up a phone call.
New pricing schemes. While most operators already provide mobile Internet telephony services, technically referred to as voice over IP (VoIP) or voice over LTE (VoLTE), many operator tariff schemes still have potential for disruption on the commercial side. In addition to offering competitive rates, new players may further increase margin pressure by including refunds of unused, prepaid minutes in their pricing models. For advertising-centric players or social-media companies entering the MVNO market, the advertising value or additional call-behavior data may even lead to cross-subsidizing offerings in the short term.
Global roaming services. Last but not least, other players are primarily targeting the still-expensive global data-roaming market for end users. Strong global brand power paired with the technology of reprogrammable e-SIMs—supporting over-the-air provisioning of multiple electronic user profiles of global operators—can be turned into easy-to-use offers for global travelers. These transparently priced global roaming services will allow users to choose a local network with a few clicks on the device. Current global roaming offers based on reprogrammable SIMs are priced near the upper end of the market, but providers in emerging markets may soon offer similar services and more competitive global tariff schemes.

The GSMA is working with global network operators to develop a standardized reference architecture for the implementation of e-SIM technology. The process under way may lead to widespread industry adoption of e-SIMs in the very near future.
New entrants and new sales and service models will drive e-SIM’s impact on the mobile-telecommunications market in the next two to five years. Revenue is at stake, and operators’ approaches to new propositions, shared data-tariff portfolios, potential new revenue streams, and handset-subsidy strategies across multiple markets will play a big role in how they fare in the new e-SIM ecosystem. Whether an operator decides to take a role as a smart follower or an early mover, an overall strategy and associated initiatives need to be shaped thoughtfully now.

Wednesday, June 8, 2016

Cognitive Computing


https://en.wikipedia.org/wiki/Cognitive_computing
http://fortune.com/2016/02/25/ibm-sees-better-days-ahead/

*****

Cognitive computing is the simulation of human thought processes in a computerized model. Cognitive computing involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works. The goal of cognitive computing is to create automated IT systems that are capable of solving problems without requiring human assistance.

Cognitive computing systems use machine learning algorithms. Such systems continually acquire knowledge from the data fed into them by mining data for information. The systems refine the way they look for patterns as well as the way they process data so they become capable of anticipating new problems and modeling possible solutions.

Cognitive computing is used in numerous artificial intelligence (AI) applications, including expert systems, natural language programming, neural networks, robotics and virtual reality. The term cognitive computing is closely associated with IBM’s cognitive computer system, Watson.

*****

Artificial intelligence has been a far-flung goal of computing since the conception of the computer, but we may be getting closer than ever with new cognitive computing models.

Cognitive computing comes from a mashup of cognitive science — the study of the human brain and how it functions — and computer science, and the results will have far-reaching impacts on our private lives, healthcare, business, and more.

What is cognitive computing?

The goal of cognitive computing is to simulate human thought processes in a computerized model. Using self-learning algorithms that use data mining, pattern recognition and natural language processing, the computer can mimic the way the human brain works.

While computers have been faster at calculations and processing than humans for decades, they haven’t been able to accomplish tasks that humans take for granted as simple, like understanding natural language, or recognizing unique objects in an image.

Some people say that cognitive computing represents the third era of computing: we went from computers that could tabulate sums (1900s) to programmable systems (1950s), and now to cognitive systems.

These cognitive systems, most notably IBM’s Watson, rely on deep learning algorithms and neural networks to process information by comparing it to a teaching set of data. The more data the system is exposed to, the more it learns, and the more accurate it becomes over time. The neural network is a complex “tree” of decisions the computer can make to arrive at an answer.
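The "learns from a teaching set" idea can be seen in miniature with a single-neuron perceptron, the simplest possible neural network. This toy sketch is nothing like Watson's deep learning pipeline; it only shows a system adjusting its internal weights from labeled examples until its answers match the training data.

```python
# Toy illustration of learning from a teaching set: a single-neuron
# perceptron learning the logical AND function from four labeled examples.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights and bias whenever a prediction misses its target."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teaching set: the four input/output pairs of logical AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
print([predict(w, b, x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

Real cognitive systems stack many thousands of such units into deep networks, but the principle is the same: exposure to more labeled data refines the weights and improves accuracy.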

What can cognitive computing do?

For example, according to this TED Talk video from IBM, Watson could eventually be applied in a healthcare setting to help collate the span of knowledge around a condition, including patient history, journal articles, best practices, diagnostic tools, etc., analyze that vast quantity of information, and provide a recommendation.

The doctor is then able to look at evidence-based treatment options based on a large number of factors including the individual patient’s presentation and history, to hopefully make better treatment decisions.

In other words, the goal (at this point) is not to replace the doctor, but expand the doctor’s capabilities by processing the humongous amount of data available that no human could reasonably process and retain, and provide a summary and potential application.

This sort of process could be done for any field in which large quantities of complex data need to be processed and analyzed to solve problems, including finance, law, and education.

These systems will also be applied in other areas of business including consumer behavior analysis, personal shopping bots, customer support bots, travel agents, tutors, security, and diagnostics.  Hilton Hotels recently debuted the first concierge robot, Connie, which can answer questions about the hotel, local attractions, and restaurants posed to it in natural language.

The personal digital assistants we have on our phones and computers now (Siri and Google, among others) are not true cognitive systems; they have a pre-programmed set of responses and can only respond to a preset number of requests.  But the time is coming in the near future when we will be able to address our phones, our computers, our cars, or our smart houses and get a real, thoughtful response rather than a pre-programmed one.

As computers become more able to think like human beings, they will also expand our capabilities and knowledge. Just as the heroes of science fiction movies rely on their computers to make accurate predictions, gather data, and draw conclusions, so we will move into an era when computers can augment human knowledge and ingenuity in entirely new ways.

Sunday, January 10, 2016

3 Useful Wireless Technologies You Should Know About



Most of you are intimately familiar with the popular short-range wireless technologies such as Wi-Fi, Bluetooth, ZigBee, 802.15.4, and maybe even Z-Wave. All of these are addressing the Internet-of-Things (IoT) movement. But there are other, lesser-known technologies worth considering. I have identified three that justify a closer look, especially if longer range is a requirement: These are LoRa, Sigfox, and Weightless.

All of these are relatively new and solve the range problem for some IoT or Machine-to-Machine (M2M) applications. These technologies operate in the sub-1-GHz unlicensed spectrum. Lower frequencies always give longer range for a given power level, thanks to the physics of radio waves. Where most shorter-range technologies fizzle out beyond about 10 meters, these newer technologies are good up to several miles or so.
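That physics is captured by the free-space path loss formula, FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44. A quick calculation makes the sub-GHz advantage concrete; the two frequencies below are just representative examples (a 915-MHz LPWAN band versus 2.4-GHz Wi-Fi).

```python
import math

# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
# Lower frequency -> less path loss -> longer range for the same power budget.

def fspl_db(distance_km, freq_mhz):
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Loss over 1 km at a sub-GHz LPWAN frequency vs. 2.4-GHz Wi-Fi:
loss_915 = fspl_db(1, 915)
loss_2400 = fspl_db(1, 2400)
print(round(loss_915, 1), round(loss_2400, 1))  # 91.7 100.0
print(round(loss_2400 - loss_915, 1))           # 8.4 dB less loss at 915 MHz
```

An 8.4-dB saving may not sound like much, but on a logarithmic scale it roughly translates into 2.6 times the usable distance in free space, before even counting sub-GHz signals' better penetration through walls.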

Up until now, cellular connections have been used to implement M2M or IoT monitoring and control applications with a range greater than several hundred meters. Most cellular operators offer an M2M service, and multiple cell phone module makers can provide the hardware. But because of the technical complexity and high cost, cellular is not the best solution for a simple monitoring or control application.
These newer technologies offer the range needed at lower cost and significantly lower power consumption. This new category is now known as low-power wide-area networks (LPWANs). These are beginning to emerge as a significant competitor to cellular in the IoT/M2M space. A recent investigation by Beecham Research predicts that by 2020 as much as 26% of IoT/M2M coverage will be by LPWAN.

LoRa

LoRa stands for long-range radio. This technology is a product of Semtech. Typical operating frequencies are 915 MHz for the U.S., 868 MHz for Europe, and 433 MHz for Asia. The LoRa physical layer (PHY) uses a unique form of FM chirp spread spectrum, along with forward error correction (FEC), allowing it to demodulate signals 20 to 30 dB below the noise level. This gives it a huge link budget with a 20- to 30-dB advantage over a typical FSK system.
The spread spectrum modulation permits multiple radios to use the same band if each radio uses a different chirp and data rate. Data rates range from 0.03 kb/s to 37.5 kb/s. Transmitter power level is 20 dBm. Typical range is 2 to 5 km, and up to 15 km is possible depending upon the location and antenna characteristics.
The media-access-control (MAC) layer is called LoRaWAN. It is IPv6 compatible. The basic topology is a star where multiple end points communicate with a single gateway, which provides the backhaul to the Internet. Maximum payload in a packet is 256 bytes. A CRC is used for error detection. Several levels of security (EUI64 and EUI128) are employed. Low power consumption is a key feature.
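The figures above imply an unusually large link budget. A rough back-of-the-envelope calculation, using the 20-dBm transmit power from the text and an assumed receiver sensitivity of -137 dBm (a plausible value for a slow LoRa data rate, not an official specification), looks like this:

```python
# Rough LoRa link-budget sketch. The 20-dBm transmit power comes from the
# text; the receiver sensitivity figure is an illustrative assumption.

tx_power_dbm = 20          # LoRa transmitter power level
rx_sensitivity_dbm = -137  # assumed sensitivity at a slow LoRa data rate

link_budget_db = tx_power_dbm - rx_sensitivity_dbm
print(link_budget_db)  # 157 dB

# Compare with a plain FSK link at an assumed -110 dBm sensitivity:
fsk_sensitivity_dbm = -110
fsk_budget_db = tx_power_dbm - fsk_sensitivity_dbm
print(link_budget_db - fsk_budget_db)  # 27 dB advantage for LoRa
```

The 27-dB gap matches the 20- to 30-dB advantage over a typical FSK system cited above: it comes from the chirp spread spectrum's ability to demodulate signals well below the noise floor.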

Sigfox

Sigfox is a French company offering its wireless technology, as well as a local LPWAN for longer-range IoT or M2M applications. It operates in the 902-MHz ISM band but consumes very little bandwidth or power. Sigfox radios use a controversial technique called ultranarrowband (UNB) modulation. UNB is a variation of BPSK and supposedly produces no sidebands if zero or negative group delay filters are used in implementation. It uses only low data rates to transmit short messages occasionally. For example, Sigfox has a maximum payload of 12 bytes, separate from the node address. A node can send no more than 140 messages per day. This makes Sigfox ideal for simple applications such as energy metering, alarms, or other basic sensor applications. Sigfox can set up a local LPWAN, then charge a low rate for a service subscription.

Weightless

Weightless is an open-LPWAN standard and technology for IoT and M2M applications. It is sponsored by the Weightless SIG and is available in several versions. The original version, Weightless-W, was designed to use the TV white spaces or unused TV channels from 54 to 698 MHz. Channels are 6-MHz wide in the U.S. and 8-MHz wide in Europe. These channels are ideal to support long range and non-line of sight transmission. The standard employs cognitive radio technology to ensure no interference to local TV signals. The basestation queries a database to see what channels are available locally for data transmission. Modulation can be simple differential BPSK up to 16 QAM with frequency hopping spread spectrum supporting data rates from about 1 kb/s to 16 Mb/s. Duplexing is time-division (TDD). Typical maximum range is about 5 to 10 km.
Weightless-N is a simpler version using DBPSK for very narrow bands to support lower data rates. Weightless-P is a newer, more robust version using either GMSK or offset-QPSK modulation. Data rates can be up to 100 kb/s using 12.5-kHz channels. Both the N and P versions work in the standard sub-1-GHz ISM bands, and all versions incorporate authentication and encryption for security.

Other LPWAN Options

The three technologies listed above will probably dominate LPWAN applications, but there are a few other choices. One is a variation of Wi-Fi designated by the IEEE as standard 802.11af. It was designed to operate in the TV white spaces with 6- or 8-MHz channels. The modulation is OFDM using BPSK, QPSK, 16QAM, 64QAM or 256QAM. The maximum data rate per 6-MHz channel is about 24 Mb/s. Up to four channels can be bonded to get higher rates. A database query is part of the protocol to identify usable local channels.
A forthcoming variation is 802.11ah, a non-white-space, sub-1-GHz version of Wi-Fi expected to be available in 2016. It is a simpler, lower-speed version in the ISM bands that addresses the power-consumption issue for low-power IoT applications.
Another IEEE standard identified for possible LPWAN use is 802.22. This is another OFDM standard that has been around for years. Since 802.11af/ah and 802.22 use OFDM, their power consumption makes them less desirable for low-power applications. Their main advantage may be the higher data rates possible.



Saturday, January 9, 2016

Examining The Future Of WiFi: 802.11ah, 802.11ad (& Others)



In just 15 years, WiFi has evolved from sluggish connections to an incredibly versatile connective technology. And because it plays an integral role in the lives of hundreds of millions of people, it is being improved almost constantly. But what are those big changes? And what will these new technologies bring about in upcoming years? Consumers and companies are looking for two things in particular: incredible range and extreme speed.
Within this article, we’ll give a brief explanation on IEEE protocols and standards and a history of the 802.11 family. We’ll also take a look at three up-and-coming wireless network options:
  • 802.11ah: for low data rate, long-range sensors and controllers.
  • 802.11af: for similar applications to 802.11ah. This network option relies on unused TV spectrums instead of 2.4 GHz or 5 GHz bands for transmission.
  • 802.11ad: for multigigabit speeds (sans wires) and high-performance networking.

A Brief Overview Of IEEE Standards

The Institute of Electrical and Electronics Engineers (IEEE) is a professional association that acts as an authority for electronic communication. The IEEE creates standards and protocols for communication in industries like telecommunications, information technology, and much more.  Each standard that the IEEE ratifies is designated by a unique number. 802 is the prefix used for any protocol or amendment that entails area networking. For instance, standards for Ethernet local area networks (LANs) are designated by 802.3, and Bluetooth personal area networks (PANs) are designated by 802.15. Wireless LANs—the subject of this article—are designated by 802.11.
In 1997, the IEEE released the base standard for wireless local area network (WLAN) communications, which they called 802.11. In the years following, many amendments were made to this standard. Let’s walk through what each standard has brought to communications.

A History Of Past & Current 802.11 Amendments




802.11a (1999): “WiFi A”—also known as the OFDM (Orthogonal Frequency-Division Multiplexing) waveform—was the first amendment, and it came two years after the standard was complete. This amendment defined 5 gigahertz band extensions, which made it more flexible (since the 2.4 GHz space was crowded with wireless home telephones, baby monitors, microwaves, and more).
802.11b (2000): As one of the first widely used protocols, “WiFi B” had an improved range and transfer rate, but it is very slow by today’s standards (maxing out at 11 mbps). 802.11b defined 2.4 GHz band extensions. This protocol is still supported (since 80% of WiFi runs off of 2.4 GHz), but the technology isn’t manufactured anymore because it’s been replaced by faster options.
802.11g (2003): “WiFi G” came onto the market three years after B, and it offered roughly five times the transfer rate (at 54 mbps). It defined 2.4 GHz band extensions at a higher data rate. The primary benefit it offered was greater speed, which was increasingly important to consumers. Today, these speeds are not fast enough to keep up with the average number of WiFi-enabled devices in a household or a strong wireless draw from a number of devices.
802.11n (2007): “WiFi N” offered another drastic improvement in transfer rate speed—300-450 mbps, depending on the number of antennas—and range. This was the first main protocol that operated on both 2.4 GHz and 5 GHz. These transfer rates allow large amounts of data to be transmitted more quickly than ever before.
802.11ac (2013): In 2013, “WiFi AC” was introduced. AC was the first step in what is considered “Gigabit WiFi,” meaning it offers speeds approaching 1 gbps (roughly 800 mbps in real-world throughput). That’s roughly 20 times more powerful than 802.11n, making this an important (and widely used) new protocol. AC runs on a 5 GHz band, which is important—because it’s less widely used, you’ll have an advantage as far as high online speeds are concerned, though the higher frequency and higher modulation rate mean the range is more limited.

“Future” WiFi Technologies

802.11AH

802.11ah is 900 megahertz WiFi, which is ideal for low power consumption and long-range data transmission. It’s earned the nickname “the low power WiFi” for that very reason.
Who will use it: Companies that have sensor-level technology they need to be WiFi-enabled.
Benefits:
  • Can penetrate through walls and obstructions better than high frequency networks like 802.11ad, which we’ll discuss below.
  • Great for short, bursty data that doesn’t consume much power and needs to travel long distances. This would be applicable in smart building applications, like smart lighting, smart HVAC, and smart security systems. It would also work for smart city applications, like parking garages and parking meters.
Downfalls:
  • There is no global standard for 900 MHz. Right now, 80% of the world uses 2.4 GHz WiFi, which has the benefit that you can connect on these global standard bands anywhere in the world. (If you’re on a Mac, try this: hold down the Option key and click your WiFi symbol at the top. You’ll see a bunch of information about the WiFi network you’re connected to, including the channel.)
  • AH isn’t available right now. The IEEE is in the final phases of resolving the standard, and once that’s done—currently slated for March 2016—the chip manufacturers (like Huawei, Broadcom, and Qualcomm) will have a chance to start creating physical layer chips. You will most likely start seeing WiFi AH products appear in the next 18 months to two years. The good news, however, is that organizations are providing similar technology for low power, wide-area networks (LPWAN) now, so you don’t have to wait until 802.11ah is complete to benefit from the technology.
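To see why a low power, long-range band matters for sensor-class devices, a rough duty-cycle battery estimate helps. All the numbers below (battery capacity, currents, airtime) are illustrative assumptions, not measured 802.11ah figures.

```python
def battery_life_days(capacity_mah, tx_ma, tx_seconds_per_day, sleep_ua):
    """Rough battery life for a sensor that transmits briefly and sleeps otherwise."""
    seconds_per_day = 24 * 3600
    tx_mah = tx_ma * tx_seconds_per_day / 3600.0
    sleep_mah = (sleep_ua / 1000.0) * (seconds_per_day - tx_seconds_per_day) / 3600.0
    return capacity_mah / (tx_mah + sleep_mah)

# Hypothetical sensor: 2400 mAh battery, 100 mA while transmitting,
# 10 seconds of airtime per day, 10 uA sleep current
print(round(battery_life_days(2400, 100, 10, 10)), "days")
```

Under these assumed figures the device lasts well over a decade, and the split shows why short, bursty transmissions matter: the daily sleep drain and the ten seconds of radio time cost about the same.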

802.11AF

802.11af utilizes unused television spectrum frequencies (i.e., white spaces) to transmit information. Because of this, it’s earned the nickname “White-Fi.” Because these frequencies fall between 54 MHz and 790 MHz, AF can be used for low power, wide-area transmission, like AH.
Who will use it:
  • Organizations that need extremely long-range wireless networks.
  • Organizations in environments where lower interference can drastically improve performance.
Benefits:
  • Because AF can use several unused TV channels at once, it can be used for very long range devices—potentially up to several miles, with high data rates.
Downfalls:
  • It’s still in proposal stages, so it hasn’t been approved or released to the mass market yet.
  • “White space” channels are not available everywhere; in big cities, for example, most of the TV spectrum is already in use.

802.11AD

802.11ad couldn’t be further from AH. While AH is a future LPWAN option, AD is ideal for very high data rate, very short range communications.
AD WiFi—previously known as WiGig, a name inherited from the Wireless Gigabit Alliance that originally developed the specification—separates itself from the 2.4 GHz and 5 GHz bands and operates on a 60 GHz band. This space is relatively free and open, which helps it achieve speeds roughly 50 times those of WiFi N. And while AH uses 900 MHz, AD uses 60 GHz. To put that into perspective, 60 GHz is equivalent to 60,000 MHz.
Who will use it:
  • Enterprise-level organizations that need extended bandwidth with very short-range devices.
Benefits:
  • Very good for high data rate, short-range file transfers and communication. Back when 802.11n was introduced, it was regarded as the fastest protocol yet. At 8 Gbps, AD is roughly 50 times faster than a single-stream WiFi N link. In fact, this protocol is so fast that, according to this Fast Company article, AD has the potential to “enable a whole new class of devices” like “wireless hard drives that feel as fast as locally connected ones.”
Downfalls:
  • The chips are very expensive to manufacture, which makes this a costly setup.
  • AD provides a very short range. At a frequency as high as 60 GHz, signals don’t travel far or penetrate obstacles well. This isn’t a problem if you have the router right next to you, but if you need coverage through walls, you’ll need additional routers.
  • AD (which operates on a 60 GHz band) is not a recognized international standard. This is also a downside for AH.
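The practical difference between these data rates is easiest to see as transfer time. A quick sketch (decimal gigabytes assumed, protocol overhead ignored; the 50x figure above corresponds to a single-stream 150 Mbps 802.11n link):

```python
def transfer_seconds(size_gb, link_mbps):
    """Time to move size_gb gigabytes (decimal) over a link_mbps link, ignoring overhead."""
    bits = size_gb * 8e9
    return bits / (link_mbps * 1e6)

# A 2 GB file over 802.11ad (8 Gbps) vs. single-stream 802.11n (150 Mbps)
print(transfer_seconds(2, 8000))  # 2.0 seconds on AD
print(transfer_seconds(2, 150))   # ~107 seconds on single-stream N
```

A two-second transfer versus nearly two minutes is what makes claims like “wireless hard drives that feel as fast as locally connected ones” plausible.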

Conclusion

AH (the low data rate, long-range sensor and controller WiFi), AF (or “White-Fi,” which uses unused TV spectrum for long-range transmission), and AD (the wireless multi-gigabit high-performance networking WiFi) are three important up-and-coming changes to WiFi as we know it.
These three amendments are clear evidence that WiFi has undergone a spectacular transformation in the past decade and a half. And with the IEEE reviewing amendments to the 802.11 protocol on a regular basis, we’re certain that the next 15 years will hold just as many interesting changes.


Sunday, September 27, 2015

10 TED Talks for techies


These inspiring, sometimes frightening presentations detail how technologies from bionics to big data to machine learning will change our world for good or ill -- and sooner than you might think.

Topics range from bionics, virtual reality and facial recognition to driverless cars, big data and the philosophical implications of artificial intelligence.


Hugh Herr: New bionics let us run, climb and dance

Hugh Herr is a bionics designer at MIT who creates bionic extremities that emulate the function of natural limbs. A double leg amputee, Herr designed his own bionic legs -- the world's first bionic foot and calf system called the BiOM.

Herr's inspirational and motivational talk depicts the innovative ways that computer systems can be used in tandem with artificial limbs to create bionic limbs that move and act like flesh and bone. "We want to close the loop between the human and the bionic external limb," he says. The talk closes with a moving performance by ballroom dancer Adrianne Haslet-Davis, who lost her left leg in the 2013 Boston Marathon bombings. She dances beautifully wearing a bionic leg designed by Herr and his colleagues.





Chris Milk: How virtual reality can create the ultimate empathy machine

This inspiring talk details how Chris Milk turned from an acclaimed music video director who wanted to tell emotional stories of the human condition into an experiential artist who does the same via virtual reality. He worked with the United Nations to make virtual reality films such as "Clouds Over Sidra," which gives a first-person view of the life of a Syrian refugee living in Jordan, so that U.N. workers can better understand how their actions can impact people's lives around the world.

Milk notes, "[Virtual reality] is not a video game peripheral. It connects humans to other humans in a profound way that I've never seen before in any other form of media... It's a machine, but through this machine, we become more compassionate, we become more empathetic, and we become more connected -- and ultimately, we become more human."





Topher White: What can save the rainforest? Your used cell phone

Topher White is a conservation technologist who started the Rainforest Connection, which uses recycled cell phones to monitor and protect remote areas of rainforests in real time. His extraordinary talk revolves around his 2011 trip to a Borneo gibbon reserve. He discovered that illegal logging was rampant in the area, but the sounds of animals in the rainforest were so loud that the rangers couldn't hear the chainsaws over the natural cacophony.

Resisting the urge to develop an expensive high-tech solution, White turned to everyday cell phones, encased in protective boxes and powered by solar panels. The devices are placed high in the trees and programmed to listen for chainsaws. If a phone hears a chainsaw, it uses the surprisingly good cellular connectivity in the rainforest to send the approximate location to the cell phones of rangers on the ground, who can then stop the illegal logging in the act. Through this means, White's startup has helped stop illegal logging and poaching operations in Sumatra, and the system is being expanded to rainforest reserves in Indonesia, Brazil and Africa.



Fei-Fei Li: How we teach computers to understand pictures

An associate professor of computer science at Stanford University, Fei-Fei Li is the director of Stanford's Artificial Intelligence Lab and Vision Lab, where experiments exploring how human brains see and think inform algorithms that enable computers and robots to see and think.

In her talk, Li details how she founded ImageNet, a service that has downloaded, labeled and sorted through a billion images from the Internet in order to teach computers how to analyze, recognize and label them via algorithms. It may not sound like much, but it's a vital step on the road to truly intelligent machines that can see as humans do, inherently understanding relationships, emotions, actions and intentions at a glance.





Pia Mancini: How to upgrade democracy for the Internet era

Argentine democracy activist Pia Mancini hopes to use software to inform voters, provide a platform for public debate and give citizens a voice in government decisions. She helped launch an open-source mobile platform called DemocracyOS that's designed to provide citizens with immediate input into the legislative process.

In her talk, Mancini suggests that the 18th-century democratic slogan "No taxation without representation" should be updated to "No taxation without a conversation" for the modern age. She poses the question, "If the Internet is the new printing press, then what is democracy for the Internet era?" Although it took some convincing, Mancini says, the Argentine Congress has agreed to discuss three pieces of legislation with citizens via DemocracyOS, giving those citizens a louder voice in government than they've ever had before.




Kenneth Cukier: Big data is better data

As data editor for The Economist and coauthor of Big Data: A Revolution That Will Transform How We Live, Work, and Think, Kenneth Cukier has spent years immersed in big data, machine learning and the impact both have had on society. "More data doesn't just let us see more," he says in his talk. "More data allows us to see new. It allows us to see better. It allows us to see different."

The heart of Cukier's talk focuses on machine learning algorithms, from voice recognition and self-driving cars to identifying the most common signs of breast cancer, all of which are made possible by a mind-boggling amount of data. But along with his clear enthusiasm for big data and intelligent machines, he sounds a note of caution: "In the big data age, the challenge will be safeguarding free will, moral choice, human volition, human agency." Like fire, he says, big data is a powerful tool -- one that, if we're not careful, will burn us.



Rana el Kaliouby: This app knows how you feel — from the look on your face

Technology has been blamed for lessening social and emotional connections among millennials, but what if it could sense emotion? In this talk, computer scientist Rana el Kaliouby, cofounder and chief strategy & science officer of Affectiva, outlines her work designing algorithms for an application used on mobile phones, tablets and computers that can read people's faces and recognize positive and negative emotions.

What good is that? el Kaliouby gives a few examples: Wearable glasses armed with emotion-sensing software could help autistic children or the visually impaired recognize particular emotions in others. A learning app could sense that the learner is confused or bored, and slow down or speed up accordingly. A car could sense a driver's fatigue and send an alert. "By humanizing technology," el Kaliouby concludes, "we have this golden opportunity to reimagine how we connect with machines, and therefore how we, as human beings, connect with one another."



Chris Urmson: How a driverless car sees the road

In this talk, roboticist Chris Urmson cites some of the dangers drivers face -- inclement weather; distractions that include answering phone calls, texting and setting the GPS; flawed, careless drivers -- as well as the staggering amount of time wasted each day by drivers stuck in traffic. The solution? Not surprisingly, Urmson, who has headed up Google's self-driving car project since 2009, says autonomous cars are the answer.

Urmson shows how driverless cars see and understand their environment -- the layout of the roads and intersections, other vehicles, pedestrians, bicyclists, traffic signs and signals, construction obstacles, special presences such as police and school buses, and so on -- and decide what action to take based on a vast set of behavioral models. It's a fascinating car's-eye look at the world.




Jeremy Howard: The wonderful and terrifying implications of computers that can learn

Jeremy Howard, a data scientist, CEO of advanced machine learning firm Enlitic and data science professor at Singularity University, imagines how advanced machine learning can improve our lives. His talk explores deep learning, an approach to enabling computers to teach themselves new information via set algorithms. A bit lengthy but fascinating, Howard's talk outlines different ways computers can teach themselves by "seeing," "hearing" and "reading."




Nick Bostrom: What happens when our computers get smarter than we are?

With a background in physics, computational neuroscience, mathematical logic and philosophy, Nick Bostrom is a philosophy professor at Oxford University and author of the book Superintelligence: Paths, Dangers, Strategies. He is also the founding director of the Future of Humanity Institute, a multidisciplinary research center that drives mathematicians, philosophers and scientists to investigate the human condition and its future.

This metaphysical discussion, reminiscent of a college philosophy course, explores how older A.I., programmed by code, has evolved into active machine learning. "Rather than handcrafting knowledge representations and features," Bostrom says, "we create algorithms that learn from raw perceptual data." In other words, machines can learn in the same ways that children do.

Bostrom theorizes that A.I. will be the last invention that humanity will need to make, and eventually machines will be better at inventing than humans -- which may leave us at their mercy as they decide what to invent next. A solution to control A.I., he suggests, is to make sure it shares human values rather than serving only itself.




Friday, May 22, 2015

Microsoft’s HoloLens Will Put Realistic 3-D People in Your Living Room



Demonstrations of augmented-reality displays typically involve tricking you into seeing animated content such as monsters and robots that aren’t really there. Microsoft wants its forthcoming HoloLens headset to mess with reality more believably. It has developed a way to make you see photorealistic 3-D people that fit in with the real world.

With this technology, you could watch an acrobat tumble across your front room or witness your niece take some of her first steps. You could walk around the imaginary people just as if they were real, your viewpoint changing seamlessly as if they were actually there. A sense of touch is just about the only thing missing.

That experience is possible because Microsoft has built a kind of holographic TV studio at its headquarters in Redmond, Washington. Roughly 100 cameras capture a performance from many different angles. Software uses the different viewpoints to create a highly accurate 3-D model of the person performing, resulting in a photo-real appearance.

The more traditional approach of using computer animation can’t compare, according to Steve Sullivan, who works on the project at Microsoft. He demonstrated what Microsoft calls “video holograms” at the LDV Vision Summit, an event about image-processing technology, in New York on Tuesday. More details of the technology will be released this summer.

“There’s something magical about it being real people and motion,” he said. “If you have a HoloLens, you really feel these performances are in your world.”

Microsoft is working on making it practical and cheap enough for other companies to record content in this form. It might one day be possible to visit a local studio and record a 3-D snapshot of a child at a particular point in life, said Sullivan.

Demonstrations of the technology included holographic videos of theatrical and acrobatic performances. Sullivan also showed how the format could be used in sports instruction. Someone looking for help with golf technique, for example, could wear a HoloLens to examine recordings of a pro golfer swinging different clubs at full or reduced speed, viewing the instructor up close and from different vantage points.

Microsoft has also recorded catwalk models using its system. That could help Internet shoppers by showing them how an item of clothing looks and hangs more realistically than is possible with still photos or 2-D video, said Sullivan. It should also be possible to use captured performances as the basis for animated characters in games or other applications, he said.

HoloLens uses a novel holographic display technology that can trick the eye into perceiving 3-D objects more effectively than conventional stereoscopic displays (see “Microsoft Making Fast Progress with HoloLens”). Sensors in the headset allow the device to figure out how to present virtual objects so they fit in with the real world. Sullivan showed how holographic videos can also be played back in 2-D on a tablet, in a special player that lets you drag your fingers to change your viewpoint on the action.

Several companies are working on ways to capture live action such as sports or movies for viewing on more conventional 3-D headsets like the Oculus Rift. But they capture only the 3-D view from the position of the camera at the time of shooting. Devices like the Rift also use a less sophisticated method of tricking the brain into perceiving 3-D objects, and they cannot mix virtual content with the real world.

A startup called Magic Leap, backed by Google, is developing its own wearable augmented-reality device based on display technology that’s similar to Microsoft’s (see “10 Breakthrough Technologies 2015: Magic Leap”). So far, however, Magic Leap’s demonstrations of its technology have involved animated content, not live action recorded in three dimensions.


Monday, March 23, 2015

Will TV Viewing Habits Change Metro Architecture?


According to a couple of recent surveys, TV viewing is dropping in the 18-34-year-old age group.  Some are already predicting that this will mean the end of broadcast TV, cable, and pretty much the media World as We Know It.  Certainly there are major changes coming, but the future is more complicated than the “New overtakes the Old” model.  It’s really dependent on what we could call lifestyle phases, and of course it’s really complicated.  To make things worse, video could impact metro infrastructure planning as much as NFV could, and it’s also perhaps the service most at risk of being impacted by regulatory policy.  It’s another of those industry complications, perhaps one of the most important.

Let’s start with video and viewing changes, particularly mobile broadband.  “Independence” is what most young people crave.  They start to grow up, become more socially aware, link with peer groups that eventually influence them more than their parents do.  When a parent says “Let’s watch TV” to their kids, the kids hear “Stay where I can watch you!”  That’s not an attractive option, and so they avoid TV because they’re avoiding supervision.  This was true fifty years ago and it’s still true.

Kids roaming the streets or hanging out in Starbucks don’t have a TV there to watch, and mobile broadband and even tablets and WiFi have given them an alternative entertainment model, which is streaming video.  So perhaps ten years ago, we started to see youth viewing behavior shift because technology opened a new viewing option that fit their supervision-avoidance goal.

Few people will watch a full hour-long TV show, much less a movie, on a mobile device.  The mobile experience has to fit into the life of people moving, so shorter clips like music videos or YouTube’s proverbial stupid pet tricks caught on.  When things like Facebook and Twitter came along, they reinforced the peer-group community sense, and they also provided a way of sharing viewing experiences through a link.

Given all this, it’s hardly surprising that youth has embraced streaming.  So what changes that?  The same thing that changes “youth”, which is “aging”.  Lifestyles march on with time.  The teen goes to school, gets a job and a place to live, enters a partner relationship, and perhaps has kids of his/her own.

Fast forward ten years.  Same “kid” now doesn’t have to leave “home” to avoid supervision, but they still hang out with friends and they still remember their streaming habits.  Stupid pet tricks seem a bit more stupid, and a lot of social-media chatter can interfere with keying down after a hard day at the office.  Sitting and “watching TV” seems more appealing.  My own research says that there’s a jump in TV viewing that aligns with independent living.

Another jump happens two or three years later when the “kid” enters a stable partner relationship.  Now that partner makes up a bigger part of life, the home is a better place to spend time together, and financial responsibilities are rising and creating more work and more keying down.  There’s another jump in TV viewing associated with this step.

And even more if you add children to the mix.  Kids don’t start being “independent” for the first twelve years or so on the average.  While they are at home, the partner “kids” now have to entertain them, to build a set of shared experiences that we would call “family life”.  Their TV viewing soars at this point, and while we don’t have full data on how mobile-video-exposed kids behave as senior citizens yet, it appears that it may stay high for the remainder of their lives.

These lifecycle changes drive viewing changes, and this is why Nielsen and others say that TV viewing overall is increasing even as it’s declining as a percentage of viewing by people between 18 and 34.  If you add to this mix the fact that in any stage of life you can find yourself sitting in a waiting room or on a plane and be bored to death (and who shows in-flight movies anymore?), you see that mobile viewing of video is here to stay…sort of.

The big problem that TV faces now isn’t “streaming” per se, it’s “on-demand” in its broadest sense—time-shifted viewing.  Across all age groups we’re seeing people get more and more of their “TV” in non-broadcast form.  Competition among the networks encourages them to pile into key slots with alternative shows while other slots are occupied by the TV equivalent of stupid pet tricks.  There are too many commercials and reruns.  Finally, we’re seeing streaming to TV become mainstream, which means that even stay-at-homes can stream video instead of watching “what’s on”.

I’ve been trying to model this whole media/video mess with uncertain results, largely because there are a huge number of variables.  Obviously network television creates most of the original content, so were we to dispense with it we’d have to fund content development some other way.  Obviously cable networks could dispense with “cable” and go directly to customers online, and more importantly directly to their TV.  The key for them would be monetizing this shift, and we’re only now getting some data from “on-demand” cable programming regarding advertising potential for that type of delivery.  I’m told that revenue realization from streaming or on-demand content per hundred views is less than a third of channelized real-time viewing.

I think all of this will get resolved, and be resolved in favor of streaming/on-demand in the long run. It’s the nature of the current financial markets to value only the current quarter, which means that media companies will self-destruct the future to make a buck in the present.  My model suggests that about 14% of current video can sustain itself in scheduled-viewing broadcast form, but that ignores the really big question—delivery.

If I’m right that only 14% of video can sustain broadcast delivery then it would be crazy for the cable companies to allocate the capacity for all the stuff we have now, a view that most of the cable planners hold privately already.  However, the traffic implications of streaming delivery and the impact on content delivery networks and metro architecture would be profound.

My model suggests that you end up with what I’ll call simultaneity classes.  At the top of the heap are original content productions that are released on a schedule whether they’re time-shifted in viewing or not and that command a considerable audience.  This includes the 14% that could sustain broadcast delivery and just a bit more—say 18% of content.  These would likely be cached in edge locations because a lot of people would want them.  There’s another roughly 30% that would likely be metro-cached in any significant population center, which leaves about 52% that are more sparsely viewed and would probably be handled as content from Amazon or Netflix is handled today.

The top 14% of content would likely account for about two-thirds of views, and the next 30% for 24% of views, leaving 10% for all the rest.  Thus it would be this first category of viewing, widely seen by lots of people, that would have the biggest impact on network design.  Obviously all of these categories would require streaming or “personalized delivery”, which means that the total traffic volume to be handled could be significant even if everyone were watching substantially the same shows.
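The simultaneity-class split above can be sanity-checked with a few lines of arithmetic. The tier shares come straight from the model in the text (treating the long tail as the remaining 56% of titles; the earlier paragraph groups the edge tier as “14% plus a bit more”). The “popularity ratio”—views per unit of content—is what justifies caching the top tier closest to viewers.

```python
# (share of content titles, share of views) per tier, from the article's model
tiers = {
    "edge":   (0.14, 0.66),  # top scheduled originals, cached at the edge
    "metro":  (0.30, 0.24),  # metro-cached in significant population centers
    "origin": (0.56, 0.10),  # sparsely viewed long tail, served centrally as today
}

# Shares of titles and of views should each total 100%
assert abs(sum(c for c, _ in tiers.values()) - 1.0) < 1e-9
assert abs(sum(v for _, v in tiers.values()) - 1.0) < 1e-9

for name, (content, views) in tiers.items():
    print(f"{name}: {views / content:.2f}x average popularity")
```

The edge tier draws several times the average views per title while the long tail draws a small fraction of it, which is exactly the concentration a forward-caching design exploits.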

“Could” may well be the important qualifier here.  In theory you could multicast video over IP, and while that wouldn’t support traditional on-demand programming there’s no reason it couldn’t be used with prime-time material that’s going to be released at a particular time/date.  I suspect that as on-demand consumption increases, in fact, there will be more attention paid to classifying material according to whether it’s going to be multicast or not.  The most popular material might well be multicast at its release and perhaps even at a couple of additional times, just to control traffic loads.
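The traffic argument for multicasting prime-time releases can be sketched numerically. The viewer count and stream bitrate below are hypothetical; the point is only the scale of the gap between personalized and multicast delivery.

```python
def unicast_load_gbps(viewers, stream_mbps):
    """Personalized delivery: every viewer receives their own copy of the stream."""
    return viewers * stream_mbps / 1000.0

def multicast_load_gbps(channels, stream_mbps):
    """Multicast delivery: one copy per channel, regardless of audience size."""
    return channels * stream_mbps / 1000.0

# 100,000 concurrent viewers of one prime-time release at an assumed 5 Mbps
print(unicast_load_gbps(100_000, 5))   # 500.0 Gbps of personalized delivery
print(multicast_load_gbps(1, 5))       # 0.005 Gbps for a single multicast copy
```

A five-orders-of-magnitude difference for identical content is why the most popular material might well be multicast at release, with on-demand unicast reserved for time-shifted viewing.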

The impact of on-demand on networking would focus on the serving/central office for wireline service, and on locations where you’d likely find SGWs today for mobile services (clusters of cells). 

The goal of operators will be to push caches forward to these locations to avoid having to carry multiple copies of the same videos (time-shifted) to users over a lot of metro infrastructure.  So the on-demand trend will tend to encourage forward caching, which in turn would likely encourage at least mini-data-center deployments in larger numbers.

What makes this a bit harder to predict is the neutrality momentum.  The more “neutral” the Internet is, the less operators can hope to earn from investing in it.  It seems likely that the new order (announced but not yet released) will retain previous exemptions for “interior” elements like CDNs.  That would pose some interesting challenges because current streaming giants like Amazon and Netflix don’t forward-cache in most networks.  Do operators let them use forward caches, charge for the use of them, or what?

There’s even a broader question, which is whether operators take a path like that of AT&T (and in a sense Verizon) and deploy an IP-non-Internet video model.  For the FCC to say that AT&T had to share U-verse would be a major blow to customers and shareholders, but if they don’t say that then they are essentially sanctioning the bypassing of the Internet for content in some form.  The only question would be whether bypassing would be permitted for more than just content.

On-demand video is yet another trend acting to reshape networking, particularly in the metro sense.  Its complicated relationship with neutrality regulations means it’s hard to predict what would happen even if consumer video trends themselves were predictable.  Depending on how video shakes out, how NFV shakes out, and how cloud computing develops, we could see major changes in metro spending, which means major spending changes overall.  If video joins forces with NFV and the cloud, then changes could come very quickly indeed.
