Speeds will forever increase

One Gigabit = 1000 Megabits = 1,000,000 Kilobits = 1 Billion Bits

Speed isn’t everything, but it is the main thing. Speeds will increase, and networks will have to improve to keep up. It is hard to imagine today, but the maximum Internet access speed in 1980 was 2400 bits per second (bps) over a dial-up modem. By 1998 dial-up modems had grown to 56 kilobits per second (kbps), but broadband, defined as speeds above dial-up, was still on the horizon for most of us. Today the average access speed is 26 megabits per second (mbps), an increase of about 36% a year since 1998. (This is the real, measured rate of data transfer, not the maximum link speed.) At that rate the average Internet access speed in ten years will sneak up on 600 mbps. No matter what any telephone or cable television network company says about its migration to 1 gigabit, no copper-based network has the capacity to sustain average per-user rates of 600 mbps.
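A quick sketch of that compounding arithmetic, using the rates from the paragraph above:

```python
def projected_speed(current_mbps, annual_growth, years):
    """Compound a data rate forward at a constant annual growth rate."""
    return current_mbps * (1 + annual_growth) ** years

# 26 mbps today, growing 36% a year for ten more years:
future = projected_speed(26, 0.36, 10)
print(round(future))  # ≈ 563 mbps — "sneaking up on 600"
```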

Two of the forces pushing speeds up reflect the past. Video quality keeps getting better, and each improvement demands higher data rates. We keep finding more videos and other things to store, ballooning file sizes; many users will beg for upstream transfers that take minutes rather than days. But two of the pressures are new. The Internet of Things will explode the number of devices in everyone’s home demanding access to a single router, and that router will need an ever faster data connection. And applications are coming that demand a much shorter delay between sending a request and getting a response from a server, what is usually called latency. Lower latency not only requires higher data rates, it requires servers closer to users, and in some cases dedicated servers. Future forms of Virtual Reality, holographic video games and video conferencing, machines in our homes to keep us alive, and the systems our kids will use to learn will all demand it.

Speeds Required for Various Streaming Video Formats

When television screens went digital, they became subject to technology improvements. Better picture quality came from increasing the number of pixels on a screen. The first digital screens supported Standard Definition television, 640 pixels across and 480 down. Painting every other line with 16-bit color 30 times a second (the minimum with interlaced scanning) would require a data rate of around 75 mbps (480 x 640 x 16 x 30, divided by 2), a figure possible now but unthinkable in the late 1980s when digital television was making its debut. So engineers figured out ways of compressing the digital signal, with some loss of fidelity, to around 1.5 mbps. You have no doubt seen the effects in screen tiling and distortions when people move too fast. The actual rate for any given video varies somewhat depending upon things like how fast the images and colors change.
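The uncompressed figure comes straight from pixel arithmetic; a minimal sketch:

```python
def raw_bitrate_mbps(width, height, bits_per_pixel, fps, interlaced=False):
    """Uncompressed video bit rate: pixels x color depth x frame rate.
    Interlaced scanning paints every other line, halving the rate."""
    rate = width * height * bits_per_pixel * fps
    if interlaced:
        rate //= 2
    return rate / 1e6

# Standard Definition: 640 x 480, 16-bit color, 30 interlaced frames a second.
sd = raw_bitrate_mbps(640, 480, 16, 30, interlaced=True)
print(round(sd))  # ≈ 74 mbps — the "around 75" figure above
```

Compression to 1.5 mbps is then a roughly 50:1 reduction, which is where the tiling artifacts come from.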

But that was a small television with interlaced scanning. Television screens got bigger, and interlaced scanning gave way in many cases to progressive scanning, where every line is painted on every pass, at a rate of 60 times a second, not 30. Compression got better as well (otherwise we would not have the better televisions). The next generation was High Definition TV (HDTV), with a screen of 1920 pixels across and 1080 down. In raw form at 16-bit color this screen size requires a data rate of about 2 gigabits per second (gbps). Compression miraculously reduced the rate now in use to around 7 mbps, with some infirmities in quality but a data rate sustainable over cable television networks.

Next came HD video over cellular networks and wired data networks that deliver video Over The Top (OTT), that is, over the Internet rather than a cable television network. Don’t let small screens fool you. The iPhone X has a pixel space of 1125 x 2436, and color depths have crept up from 16 bits to 32 for some video experiences. Painting every pixel on an iPhone X at 32 bits, 60 times a second with progressive scanning, requires 5.26 gigabits per second. Radical compression can get that under 10 mbps with some loss of fidelity, but cell phone video remains a key driver for upgrading speeds in cellular networks. Seventy-five percent of Internet traffic today is video.
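The 5.26 gbps figure is the same pixel arithmetic applied to the iPhone X screen:

```python
# Every iPhone X pixel at 32-bit color, 60 frames per second,
# progressive scan (no interlacing, so no halving).
raw_bps = 1125 * 2436 * 32 * 60
print(raw_bps / 1e9)   # ≈ 5.26 gbps uncompressed
print(raw_bps / 10e6)  # ≈ 526 — the compression factor needed to fit under ~10 mbps
```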

Clearly, HD television is not going to work over old DSL networks, limited by line distance to a maximum of 8 mbps (16 if using two telephone lines), with the average coming in below 4 mbps. But it will work over cable television data channels at 25 mbps or more. Ironically, then, OTT television, the bane of cable television broadcast channels as people “cut the cord,” comes to us mostly through cable television Internet subscriptions. Hulu, Amazon Video, and other television video services need cable television networks to have a chance in the market.

But the market is not standing still. As screen sizes increased, many now at 60 inches or more, manufacturers figured out ways of getting more pixels per inch onto the screen. The next size, designated Ultra High Definition Television (UHD), provides 3840 pixels across and 2160 down. The horizontal pixel count has given these screens the name 4K. The screen dazzles for sports and fast-paced video games. Data rates to paint these screens rise to a nominal 30 mbps, with the usual variations around that number depending upon the effects of compression. 4K is the future of television; 25% of all televisions sold today have 4K screens, and the iPhone X claims 4K quality in a screen five inches high.

Cable television networks are now stuck. CATV networks devote a fixed amount of bandwidth to television channels. They use modems to digitize this bandwidth and send digital television signals down it. The total capacity suits the present arrangement of Standard and HD television, but it is at its limit, so there is no capacity to broadcast UHD (4K) material. They are similarly limited on video-on-demand, whose bandwidth competes with the increasing demand for Internet channels. According to reports, cable television companies plan to upgrade their networks to eliminate the bandwidth devoted to sending all 220 channels to everyone, creating what amounts to a full video-on-demand system, hoping to free up bandwidth for non-broadcast television services. But this is extremely expensive and, as far as we know, has not yet begun anywhere (it changes many things in their systems).

We are not through. Television companies are also developing screens with pixel spaces of 7680 x 4320, called 8K in the market vernacular.  This screen will require 90 mbps per video experience.  It does not take a lot of analysis to show that widespread use of 8K screens will require fiber optic networks.

Uploading Files

File sizes have exploded. With image and video storage eating up gigabytes at ever increasing rates, laptops now ship with up to 4 terabytes of storage; that is 4000 gigabytes, or four trillion bytes. Uploading 4 terabytes to Cloud storage would take at least 123 days at today’s common broadband upload speed of 3 mbps. At a fiber speed of 1 gigabit per second, the time comes down to about 9 hours, still a long interval, but not out of the question.

The push has come as much from video and image files as anything else. Large video files have become commonplace. Uploading a collection of 50 HD videos (about 250 gigabytes) at today’s 3 mbps would take nearly 8 days. Should upload speeds migrate to 50 mbps, it still takes 11 hours. But at 1 gbps, the entire collection moves to the Cloud in 33 minutes.
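The transfer times in the last two paragraphs follow from simple division; a sketch, with the file sizes and link rates as given above:

```python
def upload_time_hours(gigabytes, upload_mbps):
    """Transfer time for a file: bytes converted to bits, divided by link rate."""
    bits = gigabytes * 1e9 * 8
    return bits / (upload_mbps * 1e6) / 3600

print(upload_time_hours(4000, 3) / 24)    # ≈ 123 days: 4 TB at 3 mbps
print(upload_time_hours(4000, 1000))      # ≈ 9 hours at 1 gigabit
print(upload_time_hours(250, 50))         # ≈ 11 hours: 250 GB at 50 mbps
print(upload_time_hours(250, 1000) * 60)  # ≈ 33 minutes at 1 gigabit
```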

Slow upload speeds actually deter many useful and important applications. If everyone had a gigabit upload rate instead of the snail-like 3 to 6 mbps, many more people in information-intensive businesses could work at home. High quality video conferencing would become commonplace, making collaborative processes more effective and plentiful. Instant cloud back-up would ensure that no work effort or desirable information is ever lost. Telemedicine machines that capture huge amounts of information would have response times that on many occasions would save lives. Very high performance multi-player video games could be played over the Internet. Videos could be shared with the same quality and frequency as we share pictures today. And the Internet of Things, an upload glutton, would actually work.

Creating symmetric channels on today’s telephone and cable television networks is possible, but only at the expense of download speeds, which will already be too slow in a few years under the present asymmetry that favors downloads far above uploads. These networks were designed for applications with high download requirements and minimal upload requirements, such as e-mail, video streaming, and file downloads to a personal computer. The future will need vast increases in upload speeds that cannot be realized on any copper-based network as configured today. Expensive upgrades will help, but even cautious projections of demand from new applications suggest that fiber to the home will be required sooner than we might expect.

Internet of Things (IoT)

Lawnmowers can now tell you when they need oil or sharpening; refrigerators can tell you when you are out of milk, and will email you if your 8-year-old accidentally leaves the door open. Myriad devices enable control of doors, windows, blinds, security cameras, lighting, and heating. In a few years it will be hard to buy anything for a home, even a new pot or stereo speaker, that does not have a sensor, a battery, a microcomputer, and a communications facility that talks to its manufacturer about how it is doing, or to your computer or iPhone for status and control. All of these devices will need access to your home WiFi for connection to your cell phone or computer app and to external resources. Most will require low data rates, but when the number of such devices sails past 100, and it will, or up to 500 as some ambitious market reports suggest, the cumulative data rate starts to look fearsome.
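A back-of-envelope sum shows how quickly modest per-device rates accumulate. The device counts and rates below are illustrative assumptions, not measurements: most sensors trickle data, while a few cameras dominate.

```python
# Hypothetical crowded smart home: (device count, mbps per device).
home = {
    "sensors (temp, door, leak)": (80, 0.01),
    "smart appliances":           (15, 0.05),
    "voice assistants":           (5,  0.3),
    "security cameras (HD)":      (4,  5.0),
}

total = sum(count * rate for count, rate in home.values())
print(round(total, 1))  # ≈ 23 mbps of mostly *upstream* traffic
```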

The home of the future (not too far away) will likely have IoT devices connected in a wireless mesh network, where each device links to two or more other devices. Such a network can relay packets from place to place until they reach a desired end point: a thing, a local computer, or a router connected to the Internet. Mesh networks beat the problem of signals failing to penetrate walls, as often happens in a more conventional star configuration with all devices talking to one router. The communication protocols for very low-power, short-distance transmission already exist (IEEE 802.15.4 and 802.11ah), as do networking protocols such as ZigBee for low-power, low data rate local networks. Battery and sensor technologies have already reached the point where a container smaller than a pea holds a sensor, a battery, a low-power microcomputer, and a low-power transceiver. They will be in all manner of devices, in wearables such as watches and medical attachments, and in our bodies. There will be charges of Big Brother and invasion of privacy, but this will happen in some way we deem acceptable, and sooner rather than later.

Some of these devices will require immediate attention at unpredictable times. Medical alert systems, burglar alarms, and fire alarms lead the list, but future things will give advance warning before something bad happens. Because the dominant direction of data flow in the Internet of Things is upstream, from the thing that collects information out to the network, its pressure on data rates runs in the wrong direction for today’s highly asymmetric channels. To avoid traffic jams in the house and to provide high-probability access for emergency signals, upstream data rates are going to have to grow dramatically. Fiber to the Home will be the answer.


Latency

Latency refers to the time it takes a packet to transit a network link twice, from you to the server and back from the server to you. It is also called delay or Round Trip Time (RTT). Latency is affected by link speed, switch transit speed, switch congestion, server processing time, packet size, and the distance the packet must traverse. Each adds to the time, and which dominates depends greatly upon conditions. But we can state limits relative to link speed and distance.

Light and electromagnetic signals travel at the “speed of light” (186,282 miles per second) only in a vacuum. Light traveling down a fiber optic line is slowed by the refractive index of the glass to around 70% of that speed, and electromagnetic signals flowing down a copper line can be as slow as 60% of light speed. Except for signals going to a satellite, all terrestrial signals also encounter amplifiers (repeaters) and switches along the path. The measured time through the transatlantic fiber optic cables from the United States to Britain is about 60 milliseconds (thousandths of a second), which cannot be improved upon much (the cables include repeaters). Transcontinental latencies are similar: 63 ms from San Francisco to New York, say, of which about 37 ms comes from the length of the fiber itself, the rest from switches and repeaters.
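The propagation component of those figures can be estimated directly. The ~3,500-mile transatlantic route length below is an illustrative assumption, not a measured cable length:

```python
def propagation_rtt_ms(route_miles, fraction_of_c=0.70):
    """Round-trip propagation delay over a fiber path.
    Light in glass travels at roughly 70% of its vacuum speed."""
    c_miles_per_sec = 186_282
    one_way_sec = route_miles / (c_miles_per_sec * fraction_of_c)
    return 2 * one_way_sec * 1000

# ≈ 54 ms of pure propagation for a ~3,500-mile transatlantic route;
# the measured ~60 ms RTT adds repeaters and switches on top.
print(round(propagation_rtt_ms(3500)))
```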

Latency has always been a problem, but one hidden from common view. A web page is built through a series of calls and responses: calls from the computer, responses from the server. There is a premium on getting the first byte there fast so the page build can start, and an equal premium on making the whole page appear as quickly as possible. Latency extends the build time at every call and response, so web page designers do everything in their power to limit the number of calls during the page build. And while the Cloud’s primary purpose is to centralize computing and storage, the “Clouds” offered by Google, Apple, and Amazon depend upon Content Delivery Networks (CDNs) that locate data centers at the edge of a global network, all over the earth, each holding duplicate files for all content of interest, precisely to get close to customers so that delay, or latency, is minimized.

While IP packets can be as large as 65,535 bytes, the practical upper limit (and now the average large Internet packet size) is the 1500-byte Maximum Transmission Unit of Ethernet, which carries IP traffic over home and business routers and local area networks. The full maximum of 65K adds about 20 ms of delay at a 25 mbps transmission rate, but only 0.5 ms at 1 gigabit; at the common 1500-byte size, the delay added at 25 mbps is under 0.5 ms. What is harder to appreciate is the effect of the total number of packets required to send an information unit before a response can be generated. Without compression, a typical advertising image of 728 x 90 pixels at 16-bit color requires about 88 full packets; compressed at a best-case 10:1 ratio, about 9 packets, which take roughly 4 ms to transfer at 25 mbps. At 1 gigabit, the transfer takes about 0.1 ms.
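A sketch of that packet arithmetic, assuming 1500-byte Ethernet packets and ignoring header overhead for simplicity:

```python
import math

def serialization_ms(payload_bytes, link_mbps, mtu_bytes=1500):
    """Time to clock an information unit onto the wire, a whole
    packet at a time (header overhead ignored for simplicity)."""
    packets = math.ceil(payload_bytes / mtu_bytes)
    return packets * mtu_bytes * 8 / (link_mbps * 1e6) * 1000

# A 728 x 90 banner at 16-bit color is ~131 KB raw, ~13 KB at 10:1 compression.
raw_bytes = 728 * 90 * 16 // 8
print(serialization_ms(raw_bytes, 25))          # ≈ 42 ms uncompressed at 25 mbps
print(serialization_ms(raw_bytes // 10, 25))    # ≈ 4 ms compressed at 25 mbps
print(serialization_ms(raw_bytes // 10, 1000))  # ≈ 0.1 ms compressed at 1 gigabit
```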

We offer these numbers because a variety of future applications will demand predictable latencies below 10 ms. This is the figure claimed by people working on remote Virtual Reality and on controlling autonomous vehicles. Three things become apparent from the material above. One, the server must be close, within a hundred miles or so (for autonomous vehicles, much closer). Two, the data rates have to be in the gigabit range in both directions. And three, there cannot be many switches and repeaters in the way.
