By J. Sharpe Smith
May 1, 2015 — $45 billion, the gross bids at the AWS-3 spectrum auction, is the astounding number that sticks in everyone’s mind when it comes to the value of spectrum. But the notion that spectrum prices have soared ever upward since auctions were introduced is a misconception, according to the keynote speech by Craig Moffett, founder of MoffettNathanson Research, delivered April 29 at the Wireless Infrastructure Show in Hollywood, Florida.
“There is a narrative out there that trees grow to the sky and spectrum prices go up with them, and there is only one direction for spectrum prices and that is up,” he said. “People believe that spectrum prices have been on this incredible upward trajectory over the years; however, the prices paid per megahertz of spectrum both at auction and in private sales have not trended upward over the years.”
In fact, Moffett said spectrum prices tend to be erratic and are not a function of supply and demand but of a “concentrated buyer market.” In certain cases, the same spectrum has been re-auctioned at a later date at a much different price.
“The number of buyers at any time drives valuations. It is not a simple commodity,” he said. “No discernible patterns exist in either auctioned spectrum prices or private spectrum sale prices. It is very unsatisfactory for a forecaster.”
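Spectrum prices across auctions of different sizes are typically compared on a dollars-per-MHz-POP basis, normalizing gross bids by bandwidth and population covered. A minimal sketch of that normalization follows; all figures in it are hypothetical placeholders, not actual auction results.

```python
# Compare auctions on a dollars-per-MHz-POP basis, the common normalization
# for spectrum prices. All numbers below are hypothetical illustrations.

def price_per_mhz_pop(gross_bids, bandwidth_mhz, population):
    """Gross bids divided by (MHz of spectrum x population covered)."""
    return gross_bids / (bandwidth_mhz * population)

# Two made-up auctions: similar bandwidth and footprint, very different bids,
# illustrating Moffett's point that prices do not follow a steady trend.
auctions = {
    "Auction A": price_per_mhz_pop(45e9, 65, 300e6),
    "Auction B": price_per_mhz_pop(19e9, 70, 300e6),
}
for name, price in auctions.items():
    print(f"{name}: ${price:.2f}/MHz-POP")
```

Normalized this way, a headline-grabbing gross total can correspond to an unremarkable per-unit price, which is why the raw dollar figure alone says little about a trend.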
May 1, 2015 — The Wireless Infrastructure Show saved the best for the last session. “Mobile Network Virtualization” was one of the hottest topics on everyone’s lips and one I found absolutely invaluable. Advances in semiconductor design have produced superchips with enormous processing power that can integrate a slew of functionality in what is almost a one-stop device, making virtual machines of all types a reality.
The panel discussed how virtual networks offer a plethora of benefits. One huge one is scalability, which is probably the single most disruptive development in network management. It enables the network to put virtual base stations just about anywhere. That is the perfect solution to the number one issue – capacity.
These virtual machines can provide capacity on demand, both permanently and for peak loads, and can re-map users on the fly. That provides better throughput and balances the entire network as its dynamics shift. They can offload traffic close to the source and feed it via backhaul to “big box” processing centers. Other benefits include better cell-edge dynamics and the ability to mix and match applications, much like Windows computers do. And they simplify network build-outs and expansion.
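The user re-mapping the panel described can be pictured as a load-balancing step: move each user onto the least-loaded virtual cell that can reach it. The sketch below is a toy illustration, not any vendor’s actual scheduler; the cell names, capacities and greedy policy are all assumptions for the example.

```python
# Toy sketch of "capacity on demand": greedily assign each user to the
# least-loaded virtual base station among those in range. Hypothetical
# illustration only, not a real scheduler.

from collections import defaultdict

def remap_users(users_in_range, capacity):
    """users_in_range: {user: [candidate cells]}; capacity: {cell: max users}.
    Assigns each user to the candidate cell with the lowest relative load;
    a user is left unassigned if every candidate is full."""
    load = defaultdict(int)
    assignment = {}
    for user, candidates in users_in_range.items():
        # Pick the candidate with the most remaining headroom.
        best = min(candidates, key=lambda c: load[c] / capacity[c])
        if load[best] < capacity[best]:
            assignment[user] = best
            load[best] += 1
    return assignment

# Hypothetical example: three users, two virtual cells.
users = {"u1": ["cellA", "cellB"], "u2": ["cellA"], "u3": ["cellA", "cellB"]}
caps = {"cellA": 2, "cellB": 2}
print(remap_users(users, caps))
```

The point of the exercise: because the base stations are virtual, this remapping can happen in software as demand shifts, rather than by re-engineering hardware.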
Sounds like the best thing since the Large Hadron Collider and the discovery of the Higgs boson particle. But as much as virtual machines have to offer, they still face some challenges.
The panel brought to light several issues, one of which is that this is new software, something the carriers aren’t fond of beta testing. MNOs are traditionally conservative, the presenters noted. They have built carrier-grade reliability into their present networks and aren’t champing at the bit to try something new, such as the middleware required to run virtual systems. So far, there isn’t a lot of understanding of the technology, either. A pretty steep learning curve is involved, and carriers aren’t anxious to foot the cost and time of the education cycle. Carriers have a lot of KPIs that need to be met. Current networks meet them; virtual networks don’t have much history with them yet.
In the end, virtual networks will need time to become a reality, according to the panelists. Capacity will be the great equalizer since current hardware solutions will not work much longer as network demands become more and more dynamic and pressures to keep costs under control mount. Network virtualization will evolve and will have to be adopted eventually, according to the consensus. But not until the same QoS as hardware networks can be proven.
That’s all for now…back to the gala…
Ernest Worthman is the editor of Small Cell Magazine.
May 1, 2015 — Another very interesting session at the Wireless Infrastructure Show, held April 29 in Hollywood, Florida, was “Mobile Network Densification.” Densification is becoming a top issue for small cell technology and deployments. Small cells’ low power and small propagation footprint mean that, for ubiquitous coverage, they will have to be literally everywhere.
The business model is a bit fuzzy. Unlike their macro brethren, small cells are mostly single-operator, making the ROI much harder to realize. Macro sites can generate revenue from many different tenants and lots of users. Not so with small cells, and the discussion went pretty deep into how that can be addressed. Panelists touched on the products and applications that could be used to generate revenue. Unfortunately, there aren’t a lot of answers yet.
Other discussions revolved around the availability of infrastructure, such as street furniture, buildings and utility poles, as well as the different challenges each of those can present. The panel also covered securing cheap, available power and backhaul.
Another issue was signal behavior around the cell edge – latency and the various types of interference. With densification, these issues are at least an order of magnitude more pronounced than with macro cells.
Perhaps the most interesting perspective came in the discussion of how small cell technology is catching up to the macro cell, with small cells becoming “mini-macro cells” in functionality (see Nokia story). A rather bold and interesting statement. A novel approach proposed was to sell them in that vein – to make sharing a major goal and make them multi-application (both licensed and unlicensed technologies).
Ultimately, the panel noted, small cells have second and third generation technology available, but there are not yet any large-scale deployments of even first generation devices. They felt that first generation deployments will be the test bed used to gain experience.
So the takeaway from this session was that small cells are ready to go but face some stiff challenges. Yet the optimism was still there, and the yottabytes of data to come will be the great equalizer.
April 29, 2015 — Chris Stark, chief business development officer, North America, Nokia Networks, waxed poetic about the profound potential impact of future wireless technology on people’s lives in his keynote on the first day of the Wireless Infrastructure Show, held yesterday in Hollywood, Florida. But he also detailed some of the barriers blocking the network densification needed to make that dream a reality.
In the next decade, wireless technology will change our lives, helping people to be more productive and healthy, and reducing highway deaths, water use, pollution, and traffic jams. To achieve these benefits, networks are going to have to become a lot denser, Stark said.
“A number of technologies will have to come together to give you what would seem like infinite bandwidth, seamlessly connected across all these applications. You already see carriers connecting Wi-Fi and LTE; you can make calls over Wi-Fi and you can roam straight onto the LTE network,” he said. “In the future the layers will become even more seamless.”
In the future, cars will be like theaters with different entertainment options and will drive themselves, Stark said. He noted that earlier this year, an autonomous car drove across the United States, 3,400 miles, in nine days. Studies show that there will be 10 times more connected devices than people in the future, increasing the complexity of the network and possibly the carriers’ costs, he added.
“The complexity of the network is going to increase even more with densification; it will be a complexity in the number of sites, the type of sites and where we are going to put the sites,” Stark said. “The cloud is going to change how networks are built, and it will impact the physical side of the infrastructure.”
The hype surrounding small cells has been around for several years now, and Stark noted that deployment forecasts, which projected three-quarters of a million small cells by 2017, are falling short. The slow rollout is not because the technology has been a disappointment but because of site acquisition and development issues, he added.
Stark found answers in an Infonetics global small cell study of 21 carriers, which said the biggest challenge was site acquisition, followed by jurisdictional approval, working with building owners and backhaul availability.
Site development costs and their inconsistency are also issues, according to a Nokia study. The OEM analyzed 1,000 potential small cell sites in a certain city and how much it would cost to develop each of them. The costs ranged from $20,000-$30,000 up to more than $100,000.
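The problem Stark describes is less the absolute cost than the spread: a several-fold difference between the cheapest and most expensive candidate sites makes a large deployment hard to budget. A small sketch of that kind of per-site summary follows; the cost figures are invented for illustration and are not Nokia’s data.

```python
# Hypothetical per-site cost analysis: summarize the spread in estimated
# build costs across candidate small cell sites. All figures are invented.

import statistics

site_costs = [22_000, 28_000, 35_000, 60_000, 95_000, 120_000]  # hypothetical

low, high = min(site_costs), max(site_costs)
median = statistics.median(site_costs)
print(f"range: ${low:,}-${high:,}, median: ${median:,.0f}")
print(f"worst site costs {high / low:.1f}x the cheapest")
```

A spread like this is what makes a per-site “value index” attractive: with a predictable cost and benefit per location, the business case for a mass rollout becomes far easier to underwrite.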
“We need to remove the uncertainty surrounding what it is going to cost to build a particular site. Uncertainty leads to a pullback in terms of the launch of a massive small cell deployment,” he said. “A lot of the problems have to do with jurisdictions providing permits. We are not doing a good enough job of describing the benefits of how the cities will benefit from dense networks from an economic development perspective.”
Broadband Wireless: Not Just Dancing Kitties
Stark noted that broadband wireless is known for its entertainment and infotainment value, and he suggested that the wireless infrastructure industry develop an educational program for municipalities to describe the benefits of dense, broadband networks from a business standpoint.
“What needs to change to bring about certainty in small cell deployment? If we could come up with the value to the city of a single small cell location from an economic impact of that location standpoint. A site value index could tell the city the economic impact of that location if it had a small cell location,” Stark said.
Better informing municipalities is one of the keys to lowering the barriers to small cell site development, along with an industrialized solution to deliver high-value builds, according to Stark.
By Ernest Worthman
Today’s wireless world is changing so fast that the people who say it can’t be done are often interrupted by those who are already doing it. The author of that statement is unknown, but how true it is. And PCIA is right there, bringing proof of it to you – not just at trade shows, but nearly every day, all year.
The Wireless Infrastructure Show brings new and innovative offerings from cutting-edge players that are conquering the challenges small cell deployments face. Tuesday’s sessions ranged from edge-of-the-envelope technology presentations to reality checks on the legal landscape.
One session that was particularly well attended discussed one of the most pressing concerns of small cell deployments – backhaul.
Backhaul has several issues, but one of the most complex is power. It is easy to drop a small cell just about anywhere, but getting it to power up can be more challenging. Small cell deployments near established power infrastructures don’t present much of a problem. It is the remote locations that do.
It would seem simple just to run power and backhaul from a remote site back to the power and distribution system. However, a number of factors complicate it, especially if a considerable distance is involved.
Setting aside regulatory issues such as the electrical rules set forth by the NEC and local codes, deciding how to power remote nodes is multifaceted. For example, power cable metrics over distance, backup power requirements, and the types of power available (supplied vs. local, i.e., solar or wind) are all design criteria that are part of the equation. Capex and opex need to be considered as well, as do maintenance and technical support.
The biggest issue is keeping the remote site up – how much redundancy should be built into the site to make it always-on. That essentially translates into battery backup. The session had a tightly focused segment on batteries and battery technology – great stuff if you are involved in remote small cell deployments, and certainly one of the highlights of this session.
Another highlight of this session was the discussion on hybrid cable. The panelists did an outstanding job covering the technical aspects of fiber/copper hybrid cables and the advantages and problems one can bump up against when deploying it.
On the non-technical front, there was a session on property logistics and small cell deployments. It was a roundtable discussion in which the panelists answered questions about where to put small cells and how to go about getting them placed.
Interestingly, three basic issues surfaced when deploying small cells – location, power and backhaul. The panel pointed out that just because you want a small cell in a particular spot doesn’t mean it will work out that way. And even if a small cell location is acceptable, power and backhaul have to be available, or able to be made available.
The panelists also discussed a major hot button – zoning. A reality check is that placing a small cell can be just about as complex as placing a macro cell. A great deal of coordination must take place, especially if the cell is going to be on utility or railway property, which is often a desirable location. In many cases, negotiating the placement of a small cell involves contract law and complex legal documents covering rights of way and effects on adjacent property. Overall, it was a bit of an eye-opener about the logistics that can impede or complicate small cell deployments.
Finally, one of the more lively panels revolved around the future of macro sites against the backdrop of the predicted one million small cells expected to be deployed by the end of 2016.
There were some interesting arguments from panelists with macro site interests, and they made some valid points. For example, where is the tipping point between a single macro site and a number of small cells? It is a complex topic, since the costs of each vary due to a multitude of factors. But the fact remains that in some cases, the cost of a macro site can be less than that of the equivalent group of small cells for the same coverage.
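The tipping point the panel raised reduces to a back-of-the-envelope comparison: how many small cells fit within the budget of one macro site? The sketch below uses invented all-in costs purely to illustrate the arithmetic; real figures vary enormously by market and site.

```python
# Back-of-the-envelope macro-vs-small-cell tipping point.
# All cost figures are hypothetical placeholders.

def small_cells_at_breakeven(macro_cost, small_cell_cost):
    """Largest number of small cells whose total cost stays at or below
    the cost of one macro site."""
    return macro_cost // small_cell_cost

macro = 250_000   # hypothetical all-in macro site cost
small = 40_000    # hypothetical all-in small cell cost
n = small_cells_at_breakeven(macro, small)
print(f"{n} small cells fit within one macro site's budget")
```

If the coverage target needs more small cells than that break-even count, the macro site wins on capex – which is exactly the scenario the macro-side panelists were pointing to.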
As well, panelists discussed the option of upgrading a macro site with cutting-edge technology – such as agile antennas, sectorization, power control and tighter, more precise transmission envelopes – to solve coverage and loading issues, as opposed to deploying small cells to resolve them.
They also discussed how new technologies, such as LTE and LTE-A, can up the performance ante of macro sites – again, in lieu of offloading to small cells.
Finally, the opex/capex discussion reared its ugly head. It may be cheaper to deploy a series of small cells initially, but what will be the amortization metric over the long run? Would it be more cost effective than a new/upgrade macro site?
Overall, a good, thought-provoking discussion addressing the view that macro infrastructure is largely built out and small cells are the deployment platform of the future.