
Monday, June 12, 2017

Choosing the right cloud provider - AWS vs Azure vs Google Cloud Platform



AWS vs Azure vs GCP
There are three main players in the race among cloud service providers - between them, they provide all the products you might need for moving your business to the cloud. But their product offerings differ in pricing as well as in the naming of their services - which can be confusing at times.
Here’s a quick guide to compare the services provided by the three main players - Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP).
Please note - this is by no means a comprehensive document. The product offerings from AWS, Azure and GCP are vast, and it is always a good idea to consult a trusted technology partner to select the right provider for your requirements. This guide is intended to familiarize you with some of the most commonly needed cloud products.

Why move to the cloud?

Companies from both the public and private sector (Netflix, Airbnb, PBS and many more) increasingly rely on cloud services for their online operations. This allows them to concentrate on their core business and leave the infrastructure to the experts. In this day and age of constantly changing demand, setting up physical infrastructure to provide such online services would be very expensive and time-consuming - a team of technicians, a lot of extra budget for servers, constantly changing hardware requirements and so on. With the cloud offerings provided by these players, many startups find it quick and affordable to launch their own ventures without these technical challenges.

Pricing

Pricing for the products and services you use from these providers will depend largely on the computing power you need, the number of instances you deploy and the location. But some major advantages of using the cloud compared to setting up your own physical infrastructure are:
  • No upfront investment
  • Pay for what you use
  • No termination fees
  • Easy scaling up/scaling down
You can read up on the pricing details here - AWS, GCP, Azure.

Amazon Web Services (AWS) vs Microsoft Azure vs Google Cloud Platform (GCP)

Amazon introduced IaaS (Infrastructure as a Service) with their first AWS service, launched in 2004. They have been the front-runner in the race and have kept adding features and offerings since - giving them an upper hand over the competition. They are also, in some regards, the most expensive. But they are also the best in terms of service offerings. Google and Microsoft joined the bandwagon later but quickly caught up. This helped bring down prices, which ultimately benefits consumers.
Here's a quick comparison of the offerings provided by AWS, Azure and GCP and how they affect pricing.

1. Computation Power

This is the core of what computers offer - to calculate, to process data, to compute. To get faster processing for data analysis, graphics rendering or faster response times, you either need more hardware or you can go to the cloud. If you buy hardware, it is yours - but it is also very expensive: purchasing, maintenance and all the idle time when it is not in use. Moving to the cloud, on the other hand, you only pay for what you use, and it is much easier and faster to scale up or down depending on demand.
Amazon provides EC2 (Elastic Compute Cloud), their on-demand scalable instances. Google provides Compute Engine and Azure has Virtual Machines. While EC2 is arguably the most comprehensive, it can burn a hole in your pocket if you are not careful - the pricing is also a little complex to understand. The same goes for Microsoft Azure’s VMs as well. Google’s Compute Engine has very simple pricing, but it is also the least flexible.
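For illustration, here is a minimal boto3 sketch of launching a single on-demand EC2 instance. The AMI ID, region and instance type are placeholder assumptions, and it assumes AWS credentials are already configured locally.

```python
# A minimal sketch, assuming boto3 is installed and AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

# Launch one on-demand instance; the AMI ID below is a placeholder, not a real image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```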
With AWS Elastic Beanstalk and Google App Engine, you also have the option of buying managed compute for mobile and web apps - which can significantly reduce cost compared to using EC2 or Compute Engine, if your app fits the specs for these services.
All three providers offer Docker container support through AWS ECS, Google Container Engine and Azure Container Service.
Azure has another advantage in that it allows deploying Windows client apps through its RemoteApp service, which the other two lack.

2. Storage

Storage is another important criterion for cloud services. With any of these cloud providers, you can store anywhere from a few gigabytes to many petabytes of data.
For storage, Amazon's S3 (Simple Storage Service) has been around for a long time and is very well documented, with many samples as well as third-party libraries. Google's Cloud Storage and Azure's Storage provide equally reliable services, but their resources are not as extensive as S3's. Google and Azure do, however, somewhat beat S3 on pricing.
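As an illustration of how simple the object-storage APIs are, here is a minimal boto3 sketch for uploading and downloading a file with S3; the bucket name and file paths are placeholders, and AWS credentials are assumed to be configured.

```python
# A minimal sketch, assuming boto3 and AWS credentials; bucket and keys are placeholders.
import boto3

s3 = boto3.client("s3")

# Upload a local file, then fetch it back under a different local name.
s3.upload_file("report.csv", "my-example-bucket", "backups/report.csv")
s3.download_file("my-example-bucket", "backups/report.csv", "report_copy.csv")
```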
Service | Provider | Pricing (USD per GB per month, approx.)
Cloud Storage | Google | $0.026 (standard) / $0.02 (Durable Reduced Availability)
Data Lake Storage | Azure | $0.04
S3 | AWS | $0.03 (standard) / $0.0125 (infrequent access)
Azure Storage | Azure | $0.024 (Locally Redundant) / $0.048 (Geographically Redundant) / $0.061 (Read-Access Geographically Redundant)
Block Storage | Rackspace | $0.12
Cloud Files | Rackspace | $0.10
Archiving services are offered at lower rates than regular cloud storage, but also with lower access speeds. They are generally used for storing objects that are not accessed regularly. The archiving services from the three providers are Glacier (AWS), Azure Backup (Azure) and Cloud Storage Nearline (GCP); AWS and Azure also provide broader archiving solutions - Data Archive (AWS) and Backup and Archive (Azure).
Service | Provider | Pricing (USD per GB per month, approx.)
Cloud Storage Nearline | Google | $0.01 (storage) + $0.01 (retrieval)
Glacier | AWS | $0.007
Storage | Azure | $0.01 (Locally Redundant) / $0.02 (Geographically Redundant) / $0.025 (Read-Access Geographically Redundant)
Apart from storage and archiving, all three also offer content delivery network (CDN) services. AWS provides CloudFront, Azure has its Content Delivery Network and GCP has Cloud CDN.

3. Analytics

Now comes the interesting and probably most important part - putting the computing, storage and content delivery to work through analytics. Solutions hosted on the cloud usually deal with large data sets (big data), and unless we run analytics on this data and extract insights from it, it is pretty much useless.
Analytics on big data can help with many things - predictions, new offerings, cross-selling and so on. But analytics on such large data sets requires specific tools and programming models. Google was at the forefront of this race thanks to MapReduce, and has come up with a number of products for users - BigQuery, Cloud Dataproc (managed Spark & Hadoop), Cloud Dataflow (real-time data processing), Cloud Datalab, Cloud Pub/Sub (streaming data and messaging) and even Genomics (for processing genomic data)!
Amazon provides Elastic MapReduce and Azure provides HDInsight. All three also provide various Big Data Solutions you can use. You can find them here - GCP Big Data solutions, AWS Big Data offerings, Azure Big Data solutions.
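To give a feel for how lightweight querying big data can be, here is a minimal sketch using the google-cloud-bigquery Python client against one of Google's public sample datasets; it assumes application credentials are set up, and the dataset and column names should be treated as assumptions.

```python
# A minimal sketch using the google-cloud-bigquery client; assumes
# GOOGLE_APPLICATION_CREDENTIALS points at a service account key.
from google.cloud import bigquery

client = bigquery.Client()

# Aggregate a public sample table (treat the dataset/column names as assumptions).
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row.name, row.total)
```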
While Google has taken the lead on big data, Amazon takes the cake with QuickSight - an offering that helps businesses make sense of large amounts of structured and unstructured data and identify opportunities in it. It is aimed at businesses that don't see the need for a team of data scientists implementing analytics solutions, but still need to gather insights from the data they capture.
Analytics will also likely require you to use machine learning. Google is ahead in this race too, with Cloud Machine Learning as well as products it uses for its own apps in specific areas - Cloud Vision, Speech, Natural Language and Translate. The Cloud ML alternatives from AWS and Azure are Amazon Machine Learning and Azure Machine Learning.
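As an example of these pre-trained APIs, here is a minimal sketch of label detection with the google-cloud-vision Python client; the image file is a placeholder and the exact class names can vary between library versions.

```python
# A minimal sketch using the google-cloud-vision client; the image path is a
# placeholder and class names may differ slightly between library versions.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the pre-trained model for labels describing the image.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 2))
```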

4. Location

When deploying your cloud services, location is very important - you should choose a data center close to your primary users. For instance, if you are serving users on the East Coast of the United States, it is best to deploy your services there. This reduces response time and offers a better user experience. CDNs also help here by delivering resources faster.
AWS regions (map)
Amazon wins hands-down in this area - it has the most extensive coverage, closely followed by Azure (which has very good coverage in Asia). Google has good coverage in the United States, but not so much in Europe or Asia.
Azure regions (map)
When selecting the location, keep in mind that different locations have different costs. The United States and Europe generally offer the cheapest options. You can find more information here - AWS global reach, Azure regions, Google Cloud locations.
Google Cloud locations (map)

5. Other products & services

These are the main service offerings from these cloud providers, but that is not all. Here's a quick list of other resources and services you might be interested in:

a. Networking

  1. DNS - Amazon Route 53, Google Cloud DNS, Azure DNS (see the Route 53 sketch after this list)
  2. Virtual Private Networks - Amazon VPC, Google Cloud Virtual Network, Azure VPN Gateway
  3. Load Balancing - Amazon ELB, Google Cloud Load Balancing, Azure Load Balancer
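As a quick illustration of the DNS item above, here is a minimal boto3 sketch that upserts an A record in an existing Route 53 hosted zone; the zone ID, domain name and IP address are placeholders.

```python
# A minimal sketch with boto3; the hosted zone ID, domain and IP are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",  # create the record, or update it if it exists
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```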

b. Database

  1. SQL - Amazon RDS (supports multiple database engines; see the RDS sketch after this list), Azure SQL Database, SQL Data Warehouse and SQL Server Stretch Database, Google Cloud SQL (only supports MySQL as of now)
  2. NoSQL - Amazon DynamoDB, Azure DocumentDB and Table Storage, Google Bigtable and Cloud Datastore
  3. Cache services - Amazon ElastiCache, Azure Redis
  4. Other cloud database offerings by Amazon
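For illustration, here is a minimal boto3 sketch of provisioning a small managed MySQL instance on RDS; the identifier, credentials and sizing are placeholder assumptions.

```python
# A minimal sketch with boto3; identifier, credentials and sizing are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="example-db",
    DBInstanceClass="db.t2.micro",
    Engine="mysql",            # RDS also supports other engines such as PostgreSQL
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    AllocatedStorage=20,       # in GB
)
```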
Then there are developer tools, security offerings, disaster recovery tools and so on.

Other Service Providers

While Amazon, Microsoft and Google are the big names among cloud providers, there are many smaller providers who offer very competitive pricing. Many of them focus on indie developers rather than companies and can be worth a shot - especially if you only need moderate scaling. Some examples include:
  • Digital Ocean
  • Rackspace Cloud
  • Linode
  • Vultr


Sunday, June 11, 2017

What is Ruby on Rails



What is Ruby on Rails?

Ruby on Rails (RoR) is a web development framework written in Ruby. It simplifies and abstracts repetitive tasks and aids speedy application development. Programmers love this framework because of its simplicity, agility, elegance and speed.

The key principle of RoR is convention over configuration. Rails comes with a set of conventions that save programmers the time otherwise spent configuring files just to get set up.

Why ROR for App Development?

Ruby on Rails is a framework built using Ruby. It aims at rapid application development for web and mobile applications.



Rails is a start-up friendly framework. It is easy to use and understand and provides ample scope for development and scalability. Many startups in the last decade have opted for Rails, including names like GitHub, Basecamp, SlideShare and Groupon. Rails is so loved because of its developer-friendly command line and online support. To make it easier for you, we have listed some of the most prominent plus points of Ruby on Rails.

Rapid Development:

Speed is the biggest competitive advantage of RoR. Rails is designed to support rapid application development (RAD). The Model View Controller (MVC) pattern in RoR is beautifully implemented, and it focuses on DRY (Don't Repeat Yourself). These principles reduce the time spent on app development and help programmers work at a fast pace. The availability of good testing frameworks also helps with bug fixing. The ease of accommodating changes makes Rails the best pick for fast application development.

Adaptability

Rails code largely reads as you go. If properly written, it needs very little separate documentation, which means it is easier for new developers to catch up on a project and start working. This self-documenting quality saves developers the time spent writing documentation.

Agile Methodology

Agile is the use of incremental, iterative and empirical processes to respond to project uncertainties. There is a wide gap between how a developer and a client think development works. Agile methodology is a way to improve communication and handle changing client requirements in real time. It is an alternative to the traditional sequential approach to development.

Cost Cuts

Rails is built on a completely open source platform. The combination of Linux and Ruby (both open source) provides great incentives for start-ups, which are always tight on budget. Since Rails and most of its libraries are open source, there are no licensing costs involved. And not to forget, savings in development time are cost cuts too.

The CATCH

The two main downsides of Rails are the lack of experienced professionals and compatibility issues. Finding an experienced programmer is difficult, as there are still relatively few developers who are good with Rails, and many experienced Java and PHP professionals have not yet adopted RoR. That said, RoR is evolving very quickly, and its developer-friendly nature appeals to skilled developers.

RoR is much more resource intensive than PHP, so many hosts are yet to develop support for it, and Rails is usually not a good bet for low-end shared hosts. But this is by no means a deal breaker. Heroku and EngineYard are among the best hosts providing such services. Alternatively, you can host it on a Virtual Private Server (VPS) with Amazon EC2, Rackspace or Linode; you then enjoy complete control over your server and can allocate resources to your application as needed.



Amazon DynamoDB vs MongoDB


In the past few years, we have seen a huge change in trends in the database space. An emerging trend has been the use of NoSQL ("not only SQL"). NoSQL came into the picture when developers realized the need for an agile delivery system that can easily process unstructured data. The system needed to be extremely dynamic, and relational databases did not quite cut it - a relational model may not be the best solution for all situations. An alternative, more "cloud-friendly" approach is to employ NoSQL instead. This is where Amazon DynamoDB and MongoDB come in.


NoSQL is a whole new way of thinking about databases - it is not a relational model. MongoDB, the most famous of the NoSQL offerings, uses a document model: a MongoDB database holds collections, and each collection is a set of documents. Embedded documents and arrays reduce the need for joins, a key factor for high performance and speed.
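A minimal pymongo sketch of that document model, assuming a local MongoDB instance; the database, collection and field names are made up for illustration. Note how the embedded items array keeps the whole order in one document, so reading it back needs no join.

```python
# A minimal sketch with pymongo, assuming a MongoDB server on localhost;
# database, collection and field names are made up for illustration.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]

# One document holds the order and its embedded line items, so no join is needed.
db.orders.insert_one({
    "customer": "Alice",
    "items": [
        {"sku": "A-100", "qty": 2},
        {"sku": "B-200", "qty": 1},
    ],
    "total": 59.97,
})

order = db.orders.find_one({"customer": "Alice"})
print(order["items"])
```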


In January 2012, Amazon announced their NoSQL service, DynamoDB. While not the first in the league, it is definitely a game changer. Most people think of MongoDB as the epitome of NoSQL because of its ease of use compared to other databases, but it has major drawbacks in terms of data management.

Five features of Amazon DynamoDB that are very compelling compared to its competitors:
1. Analytics

Extending analytics to a database has never been easy. Ideally, we would just send a request to the database and have it return the result when ready, but every NoSQL database has fallen short in this respect. MongoDB, for example, has major limitations when running map-reduce jobs.

DynamoDB, on the other hand, integrates with Elastic MapReduce and reduces the complexity of analyzing unstructured data. This is a big plus.

2. Relaxed vs Strong consistency

With Amazon DynamoDB, there is no need to hand-code replication of values. Once you make a choice between relaxed and strong consistency, everything works as it should. As with other Amazon offerings, read and write units can be adjusted based on actual usage. In our opinion, nothing beats this!

3. Ease of getting started

Amazon poses a real threat to competition by offering a hosted solution. If you have an AWS account, getting started with DynamoDB is as simple as making a single API call! With other NoSQL solutions, the developer must have the right servers, installations and configurations. With DynamoDB, they just need to concentrate on the application and let AWS handle the rest.
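A minimal boto3 sketch of that getting-started flow - creating a table and writing and reading one item. The table name, key schema and capacity numbers are placeholder assumptions, and AWS credentials are assumed to be configured.

```python
# A minimal sketch with boto3; table name, key schema and capacity are placeholders,
# and AWS credentials are assumed to be configured.
from decimal import Decimal
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

table = dynamodb.create_table(
    TableName="Orders",
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
table.wait_until_exists()

# DynamoDB stores numbers as Decimal, not float.
table.put_item(Item={"order_id": "1001", "customer": "Alice", "total": Decimal("59.97")})
print(table.get_item(Key={"order_id": "1001"})["Item"])
```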

4. Performance

As with other AWS services, Amazon delivers amazing performance with DynamoDB. It offers single-digit millisecond latency even under very heavy loads. All data is synchronously replicated across availability zones, and there is no downtime even while throughput is being updated. The use of SSDs is a real game changer for random reads and updates.


5. Pay for use

Amazon lets you buy operations-per-second capacity rather than CPU hours or storage space. This removes a whole lot of complexity for developers who would otherwise need to tune the database configuration, monitor performance levels and ramp up hardware when needed. It gives users fast, reliable storage with costs that scale in direct proportion to demand.
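For example, adjusting that purchased capacity is a single API call; a minimal boto3 sketch, with the table name and capacity numbers as placeholders:

```python
# A minimal sketch with boto3; the table name and capacity numbers are placeholders.
import boto3

client = boto3.client("dynamodb", region_name="us-east-1")

# Scale provisioned read/write capacity up for a traffic spike (and back down later).
client.update_table(
    TableName="Orders",
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 25},
)
```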


Even considering the few flaws DynamoDB does have, as a product it comes closest among its competitors to fulfilling the promise of NoSQL: easy-to-use structured storage without the complexity of managing SQL servers, plus the reliability and performance benefits of scaling out.

Friday, June 2, 2017

The Blockchain and Us (2017)





Blockchain & Bitcoin TedTalks



The Blockchain Revolution



Blockchain Technology: From Hype to Reality



Bitcoin for consumers today





Bitcoin & Blockchain - Industry Leaders Speak


Bill Gates - Strongest Believer of this Technology




Richard Branson on Bitcoin


Ex Facebook Founders on Bitcoin & Blockchain Technology

Wednesday, May 31, 2017

Smart contract use cases in industry


Paper contracts can take weeks to travel around the globe, while digital documents are uncomfortably easy to forge. Is there a way to automate transactions to make them smoother, more efficient, and more secure for all parties? Leaders are looking at blockchain and smart contracts as a viable solution.


Blockchain technology is generating significant interest across a wide range of industries. As the field of applications for blockchains grows, industry leaders are customizing and tailoring the technology to fit very particular uses. Blockchain-based smart contracts—self-executing code on a blockchain that automatically implements the terms of an agreement between parties—are a critical step forward, streamlining processes that are currently spread across multiple databases and ERP systems. Smart contracts in the commercial realm have not yet been proven, but we believe that permissioned blockchains (those that are privately maintained by a small group of parties) in particular will find near-term adoption. Two blockchain-based smart contract use cases—(1) securities trade clearing and settlement and (2) supply chain and trade finance document handling—carry important lessons for business and technology leaders interested in smart contract applications. 

SIGNALS 

  • Smart contract VC-related deals totaled $116 million in Q1 of 2016, more than twice as much as the prior three quarters combined and accounting for 86 percent of total blockchain venture funding
  • An Ethereum-based organization has raised over $150 million to experiment with and develop smart contract-driven applications

  • The Australian Securities Exchange is developing a blockchain-based post-trade solution to replace its current system
  • The Post-Trade Distributed Ledger Group, an organization launched to explore post-trade applications on the blockchain, has 37 financial institutions as members
  • Five global banks are building proof-of-concept systems with a trade finance and supply chain platform that uses smart contracts
  • Barclays Corporate Bank plans to leverage a smart contract bill-of-lading platform to help its clients reduce supply chain management costs
  • The state of Delaware announced initiatives to utilize smart contracts for state-recognized “distributed ledger shares” and to streamline back-office procedures


WHAT ARE BLOCKCHAIN-BASED SMART CONTRACTS?


Smart contracts represent a next step in the progression of blockchains from a financial transaction protocol to an all-purpose utility. They are pieces of software, not contracts in the legal sense, that extend blockchains’ utility from simply keeping a record of financial transaction entries to automatically implementing terms of multiparty agreements. Smart contracts are executed by a computer network that uses consensus protocols to agree upon the sequence of actions resulting from the contract’s code. The result is a method by which parties can agree upon terms and trust that they will be executed automatically, with reduced risk of error or manipulation.


Technology leaders envision many applications for blockchain-based smart contracts, from validating loan eligibility to executing transfer pricing agreements between subsidiaries. Importantly, before blockchain this type of smart contract was impossible because parties to an agreement of this sort would maintain separate databases. With a shared database running a blockchain protocol, the smart contracts auto-execute, and all parties validate the outcome instantaneously and without need for a third-party intermediary.


But when should companies employ blockchain-enabled smart contracts rather than existing technology? They can be a worthwhile option where frequent transactions occur among a network of parties, and manual or duplicative tasks are performed by counterparties for each transaction. The blockchain acts as a shared database to provide a secure, single source of truth, and smart contracts automate approvals, calculations, and other transacting activities that are prone to lag and error.
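Real smart contracts are written in a blockchain language such as Solidity and executed by the network's consensus protocol. Purely as a language-neutral illustration of the "self-executing agreement" idea, here is a toy Python sketch of escrow-style contract logic; it involves no blockchain, consensus or cryptography, and all names and amounts are made up.

```python
# Toy illustration only: escrow-style "self-executing" logic in plain Python.
# A real smart contract would run on a blockchain (e.g. written in Solidity) and be
# executed by the network's consensus protocol; nothing here is on a blockchain.
from dataclasses import dataclass, field


@dataclass
class ToyEscrowContract:
    buyer: str
    seller: str
    amount: float
    approvals: set = field(default_factory=set)
    paid_out: bool = False

    def approve(self, party: str) -> None:
        if party in (self.buyer, self.seller):
            self.approvals.add(party)
        self._maybe_execute()

    def _maybe_execute(self) -> None:
        # "Self-executing": once both parties have approved, payment releases automatically.
        if not self.paid_out and self.approvals == {self.buyer, self.seller}:
            self.paid_out = True
            print(f"Released {self.amount} to {self.seller}")


contract = ToyEscrowContract(buyer="Alice", seller="Bob", amount=100.0)
contract.approve("Alice")
contract.approve("Bob")   # second approval triggers the automatic release
```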

BLOCKCHAIN-BASED SMART CONTRACT BENEFITS


For a wide range of potential applications, blockchain-based smart contracts could offer a number of benefits:


Speed and real-time updates. Because smart contracts use software code to automate tasks that are typically accomplished through manual means, they can increase the speed of a wide variety of business processes.

Accuracy. Automated transactions are not only faster but less prone to manual error.

Lower execution risk. The decentralized process of execution virtually eliminates the risk of manipulation, nonperformance, or errors, since execution is managed automatically by the network rather than an individual party.


Fewer intermediaries. Smart contracts can reduce or eliminate reliance on third-party intermediaries that provide “trust” services such as escrow between counterparties.


Lower cost. New processes enabled by smart contracts require less human intervention and fewer intermediaries and will therefore reduce costs.


New business or operational models. Because smart contracts provide a low-cost way of ensuring that the transactions are reliably performed as agreed upon, they will enable new kinds of businesses, from peer-to-peer renewable energy trading to automated access to vehicles and storage units.

SMART CONTRACT USE CASES

To determine high-impact areas of potential, Deloitte’s analysis of smart contract use cases considered a number of factors, including: a sizable market opportunity; the presence of active, relatively well-funded start-ups targeting the opportunity; the participation of prominent investors; technical feasibility and ease of implementation; and evidence of multiple pilots or adoption by corporations. The lowest-hanging fruits today are applications in which contracts are narrow, objective, and mechanical, with straightforward clauses and clearly defined outcomes.


We have identified a range of applications - from smart health records to pay-as-you-go insurance - that companies are piloting right now (see table). Using the criteria above, two use cases stand out for their immediacy to market: trade clearing and settlement, and supply chain and trade finance.


Trade clearing and settlement


Blockchains provide a single ledger as the source of truth, and smart contracts offer the ability to automate approval workflows and clearing calculations that are prone to lag and error—thus reducing errors, cost, and the time to settlement. Trade clearing and settlement often entails labor-intensive activities that include various approvals and/or complex internal and external reconciliations. Banks maintain substantial IT networks, but independent processing by each counterparty causes discrepancies that lead to costly resolutions and settlement delays.


The opportunity to streamline clearing and settlement processes with the blockchain and smart contracts is immense. In 2015, the Depository Trust & Clearing Corp. (DTCC) processed over $1.5 quadrillion worth of securities, representing 345 million transactions. Santander Bank's innovation fund, Santander InnoVentures, expects blockchain technology to lead to $15-20 billion in annual savings in infrastructure costs by 2022. Seven start-ups, with funding of over $125 million, have platforms or services targeting this space. The list of more than 35 investors behind these companies is equally impressive; it includes not only major venture funds such as Khosla Ventures and SV Angel but also large banks such as Citigroup, JP Morgan and Santander, and other organizations such as NASDAQ and the DTCC itself.

Wall Street has also been busy exploring this space. More than 40 global banks within the R3 consortium participated in testing that included clearing and settlement activity, and many of those banks have pursued further trials individually. The Australian Securities Exchange is working on a smart contract-based post-trade platform to replace its equity settlement system, and four global banks and the DTCC recently ran a successful trial of a smart contract solution for post-trade credit default swaps.


Supply chain and trade finance documentation

Blockchains can make supply chain and trade finance documentation more efficient, by streamlining processes previously spread across multiple parties and databases on a single shared ledger. All too often, supply chains are hampered by paper-based systems reliant on trading parties and banks around the world physically transferring documents, a process that can take weeks for a single transaction. Letters of credit and bills of lading must be signed and referenced by a multitude of parties, increasing exposure to loss and fraud. Current technologies haven’t addressed this issue because digital documents are easy to forge; even current IT systems at banks simply track the logistics of physical documents for trade finance. A blockchain can provide secure, accessible digital versions to all parties in a transaction, and smart contracts can be used to manage the workflow of approvals and automatically transfer payment upon all signatures being collected.

Because current paper systems drive $18 trillion in transactions per year, there’s an attractive opportunity to decrease costs and improve reliability in supply chain and trade finance. Four start-ups have emerged in this area, all of which have noted engagement with banks in proof-of-concept activities. Funding has not been disclosed, but backers include three respected venture funds in addition to Barclays.

A number of corporations have also shown mounting interest in this area. Seven banks have revealed proof-of-concept testing, and the numbers noted by start-ups indicate more that haven't been publicly revealed. One start-up in particular noted implementation roadmaps with five banks as well as a major insurer. Barclays Corporate Bank recently partnered with one of the start-ups, Wave, a platform that stores bill-of-lading documents on the blockchain and uses smart contracts to log changes of ownership and automatically transfer payments to ports upon arrival. Bank of America, Standard Chartered, and the Development Bank of Singapore are also among the banks pursuing proofs of concept of their own.


WHAT TO WATCH

Smart contract technology is still in its early stages. Business and technology leaders who want to stay current on implications of smart contracts should track both technology and business developments surrounding smart contracts.

On the technology side, certain advances will help broaden the applications and adoption of smart contracts.

Scalability. Smart contract platforms are still considered unproven in terms of scalability.

External information. Because smart contracts can reference only information on the blockchain, trustworthy data services—known as “oracles”—that can push information to the blockchain will be needed. Approaches for creating oracles are still emerging.


Real assets. Use cases that effectively link smart contracts to real assets are still in their infancy.

Flexibility. The immutability of blockchain-based smart contracts today means that developers must anticipate any conceivable scenario necessitating changes to the contract.

Privacy. The code within smart contracts is visible to all parties within the network, which may not be acceptable for some applications.

Latency. Blockchains suffer from high latency, given that time passes for each verified block of transactions to be added to the ledger. For Ethereum, the most popular blockchain for smart contracts, this occurs approximately every 17 seconds—a far cry from the milliseconds to which we are accustomed while using non-blockchain databases.

Permissioning. While excitement for smart contracts is growing in the realm of both permission-less and permissioned blockchains, the latter is likely to see faster adoption in industry, given that complexities around trust, privacy, and scalability are more easily resolved within a consortium of known parties.

Watch for major trials or deployments that achieve new milestones in scalability, or technologies that successfully address issues of privacy or enable greater trust of oracles. These are key signs of maturity, signaling that smart contracts are positioned for wider adoption.

On the business side, new capabilities and business models that extend beyond the digital realm driven by smart contracts will emerge in the coming months. For instance, start-ups have already paired smart contracts with IoT devices to provide access via smart locks or automatically enable electric vehicle charging stations. Pushing IoT sensor data to the blockchain will also open up countless possibilities; among them, look for new business models that are based on usage rather than time, and applications that employ micropayments automatically.

Revised legislation that accommodates smart contracts, or recognition of smart contracts by legal authorities, will also be critical for some applications. This is another signal to watch for indicating that the technology is positioned for wider adoption.

CONSIDERATIONS FOR CORPORATIONS

Business leaders who may not be closely following blockchain developments should consider examining the technology and evaluate how it can be paired with smart contracts to drive efficiencies or new business capabilities.

Operations executives should look to their own processes to evaluate where smart contracts may be applicable. Some factors to look for include complex and manual work flows, multiparty agreements, lack of trust between parties, and interdependent transactions. Likewise, ideating on new capabilities that could be made possible by smart contracts should be considered in the context of current strategy or innovation efforts.

Given that smart contracts represent a new model of computing, software development teams and IT leaders should consider exploring the implications of this approach. Implementing smart contracts on a blockchain will require significant integration work, and it will be important to understand the new protocols and considerations when evaluating these applications for the enterprise.
