
Six Reasons Data Analytics is the Next Big Thing in Facility Energy Management

  • Published on January 29, 2016

Miles D. Smith

Principal Analyst - Energy at CALIBRE Systems

Obvious Opportunities Approach Extinction

On New Year’s Day 1973, Richard M. Nixon was president, crude oil was about $3 a barrel, the environmental movement was growing its roots, and the EPA had been in existence for twenty-five months. In the absence of financial and regulatory incentives, a building’s energy consumption was only a footnote to its planning, construction, and operation. Serious efforts to manage facility energy use and costs were primarily motivated by the oil embargo of October 1973 and the energy crisis of 1979.

While now distant memories, those days are important to remember because the energy management industry was born of them. Four decades later, energy management professionals have identified, quantified, and rectified inefficient facilities and campuses. Architects have become increasingly cognizant of the critical role they play in energy design and geographic placement. Energy meters and control systems are almost ubiquitous. Astonishing advances in computing, Direct Digital Controls (DDCs), materials, HVAC, and lighting technologies have dramatically expanded what is attainable in climate-control and lighting-system efficacy.

While the advances are positive and impressive, additional opportunities remain abundant but increasingly challenging to find. A major difficulty facing energy professionals is that the “low-hanging fruit” is mostly gone. Auditing a building for a second, third, or fourth time is progressively unlikely to uncover significant opportunities. The identification of future energy conservation measures (ECMs) will derive mainly from the analysis of campus, building, system, and system-subcomponent data amalgamated with a host of non-energy datasets.

Example: A multi-story campus building in Bethesda, MD was retrofitted with DDCs to replace above-ceiling-grid pneumatic Variable Air Volume (VAV) terminal unit actuators supplied by a central campus compressor station. On paper, the audit-identified project provided the customer with an excellent payback. Almost a decade after project completion, analysis of compressed air production and local consumption data indicated a surprisingly large volume still being consumed within the building. Further investigation revealed that most of the abandoned-in-place copper pneumatic air supply lines had been “sealed” using manual crimps and duct tape, allowing significant leakage 24 x 7 (8,760 hours/year). While our data analytics eventually identified the poor workmanship where multiple building energy audits had not, analysis concurrent with the project could have found it during the commissioning phase with little or no additional project cost and while the work was still under warranty.
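A minimal sketch of the kind of concurrent check that could have caught this during commissioning: watch the submetered compressed-air flow into the building after the retrofit and flag any persistent residual draw. All readings, names, and thresholds below are illustrative assumptions, not project data.

```python
# Sketch: flag residual compressed-air use after a pneumatic-to-DDC retrofit.
# Assumes hourly flow readings (scfm) from a submeter on the branch feeding
# the building; the values and threshold are hypothetical.
import statistics

# Simulated hourly post-retrofit flow readings (scfm). Near-zero flow is
# expected once the pneumatic actuators are gone; persistent flow implies leaks.
hourly_flow_scfm = [12.4, 11.8, 12.1, 12.6, 11.9, 12.3, 12.0, 12.5]

EXPECTED_RESIDUAL_SCFM = 1.0  # generous allowance for instrument noise

baseline = statistics.median(hourly_flow_scfm)
if baseline > EXPECTED_RESIDUAL_SCFM:
    print(f"Persistent flow of {baseline:.1f} scfm detected; "
          f"investigate abandoned supply lines (8,760 h/yr exposure).")
```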

Overconfidence in Energy Management and Control Systems

As Energy Management and Control Systems (EMCS) have proliferated, the pool of qualified operators has failed to keep pace. In addition, the systems are often operated by Operations and Maintenance (O&M) personnel who are judged by adherence to set points and other output parameters. The EMCS is programmed, then operated on autopilot until and unless an output problem becomes obvious (e.g., occupant hot or cold calls). As a result, systems may be performing optimally from an output perspective with little or no monitoring of energy input metrics. Data analysis provides visibility to system efficiencies and efficacies.

Over the decades, building automation has become commonplace; in most facilities, the majority of legacy systems and/or subcomponents have been replaced with DDCs. While pneumatic Building Control Systems (BCS) had notable advantages, they also carried many disadvantages. Not only do DDC systems provide superior control when optimally programmed, they also provide a rich data history. Many variations are promoted as being self-programming or self-commissioning, i.e., the EMCS is able to optimize system operation to achieve objectives while self-modifying its programming over time to minimize energy consumption. Unfortunately, such programming is not always optimal, and it does not take into account the uncanny human ability to unintentionally (or intentionally) misalign and defeat mechanical systems and programming.

Example: A critical document storage facility required tight temperature and humidity controls. The EMCS was operated by a contractor who was graded based on temperature and humidity metrics. All records indicated near-perfect output tolerance tracking. Investigation of the system’s data historian (database) revealed unusual operation patterns in one of the massive Air Handling Units (AHUs). This observation led to finding a local, pencil-eraser-sized Dual In-line Package (DIP) switch left in the “ON” position on the cooling side. Because the cooling DIP switch was “ON” and the heating side was on “AUTO”, the space would overcool, then the heating side would compensate. Even worse, the combination reduced relative humidity, inducing the EMCS to release steam into the airflow. The system was effectively in simultaneous heating/cooling/humidification mode. In total, we estimated that the single DIP switch was costing the facility over $100K/year in unnecessary energy charges. Remedied by a mere flip of that switch, the low-cost/no-cost ECM had a simple payback of twenty seconds.
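A hedged sketch of the historian analysis that surfaces this failure mode: scan AHU records for intervals where the heating and cooling sides are commanded simultaneously. The field names, deadband, and sample records are assumptions for illustration.

```python
# Sketch: detect simultaneous heating and cooling in AHU historian records,
# the signature of the stuck cooling-side DIP switch described above.
records = [
    {"timestamp": "2016-01-04 08:00", "clg_valve_pct": 62, "htg_valve_pct": 48},
    {"timestamp": "2016-01-04 08:15", "clg_valve_pct": 58, "htg_valve_pct": 51},
    {"timestamp": "2016-01-04 08:30", "clg_valve_pct": 0,  "htg_valve_pct": 35},
]

DEADBAND_PCT = 5  # a valve open beyond this is considered "active"

for rec in records:
    if rec["clg_valve_pct"] > DEADBAND_PCT and rec["htg_valve_pct"] > DEADBAND_PCT:
        print(f"{rec['timestamp']}: simultaneous heating/cooling "
              f"(clg {rec['clg_valve_pct']}%, htg {rec['htg_valve_pct']}%)")
```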

Deregulation and Distributed Generation

Historically, utilities operated all three legs of the grid: generation, transmission, and local distribution. Deregulation split these sectors to promote competition. At the Federal level, enactment of the Public Utility Regulatory Policies Act (PURPA, 1978) had many effects, including the unanticipated requirement that utilities purchase energy from non-utility generators (NUGs) and other non-traditional distributed generation (DG) sources. State regulators have since heaped innumerable requirements onto utilities: demand response, net metering, and a host of other operating constructs. Most transmission grids are now monitored and controlled by Independent System Operators (ISOs) or Regional Transmission Organizations (RTOs). Moreover, Federal, State, and Local governments provide incentives and penalties to promote renewable energy, energy surety, energy efficiency, and other noble goals.

While each of these actions is individually well-meaning, the net result is a blinding swirl of dynamic cost and operational variables that affect all parties: utility generators, large and small renewable generators, transmission operators, local distribution companies (LDCs), cooperatives, municipalities, and the customer base. Even if external intervention into utility operations and regulations ceased today, which is highly unlikely, the present state of affairs for many consumers would remain complex for years. Meanwhile, natural gas rates are currently at or near historic lows, but may rise substantially as the U.S. begins to export liquefied natural gas (LNG). The good news is that the environment is rich with opportunities for those with good analytics that provide actionable information.

From the electricity industry’s perspective, the optimal customer load profile is high demand with flat consumption. High demand multiplied by a flat load profile provides revenue and predictability across the electricity supply chain, from the LDC’s transformers and conductors to substation, transmission, and generation facilities upstream. The more closely an electricity consumer can approximate a flat load profile, the more likely they can benefit from lower cost rate options now and in the future. Also, the more flexible the load profile, the more likely a consumer will be able to benefit from almost any rate option. Last Monday, the Supreme Court ruled that FERC has the authority to regulate wholesale demand response programs[1], and those regulations and associated pricing will inevitably flow down to the retail level. At a minimum, load profile modification requires data and analytics that turn data into information, plus the mechanical and electrical ability to respond to price and other market signals.
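One common yardstick for how flat a profile is (standard in the industry, though not named above) is load factor: average demand divided by peak demand over the period. A minimal sketch, assuming 15-minute kW interval data with illustrative values:

```python
# Sketch: load factor (average kW / peak kW) from interval demand data.
# A value near 1.0 means a flat, utility-friendly profile; values well
# below 1.0 suggest peaks worth shaving or shifting.
interval_kw = [820, 845, 910, 1480, 1520, 1390, 960, 875]  # 15-min demand

load_factor = (sum(interval_kw) / len(interval_kw)) / max(interval_kw)
print(f"Load factor: {load_factor:.2f}")
```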

There was a time when an energy manager needed only to understand relatively stable energy rates and the general consumption patterns of the facilities under her/his oversight. As DG and utility responses to DG expand, rates will become increasingly dynamic, time-based and complex. One result is potential savings for those who can find opportunities in the data, and higher costs for those who cannot.

Example: A major Army garrison was invoiced on an hourly, Real Time Pricing (RTP) rate. Detailed analysis of the rate and building consumption data revealed that the highest cost hours occurred during the coldest months of the year, based on a combination of ¢/kWh and facility electricity use. Prior to the analysis, the garrison had been operating with the incorrect assumption that cost peaks occurred during the hot summer months.
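A hedged sketch of that kind of RTP analysis: weight each hour's consumption by its real-time price and rank months by total cost. The records below are illustrative; a real analysis would pull a full year of hourly billing and interval data.

```python
# Sketch: find the highest-cost months under a Real Time Pricing rate by
# combining hourly price (cents/kWh) with hourly facility consumption (kWh).
from collections import defaultdict

# (month, price in cents/kWh, consumption in kWh) -- hypothetical samples
hourly_records = [
    ("Jan", 18.2, 950), ("Jan", 21.7, 1010),
    ("Jul", 9.4, 1220), ("Jul", 8.8, 1180),
]

cost_by_month = defaultdict(float)
for month, price_cents, kwh in hourly_records:
    cost_by_month[month] += price_cents / 100 * kwh  # dollars

for month, cost in sorted(cost_by_month.items(), key=lambda kv: -kv[1]):
    print(f"{month}: ${cost:,.2f}")
```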

Decreasing Faith in the Grid

The first world has grown accustomed to a dependable power delivery system, but threats to the grid are real and increasing, as are the losses that accrue during power outages. The potential for cyber intrusion and disruption has received much attention lately, but a customer’s next brownout or blackout is still more likely to arise from congestion at choke points along the transmission grid, utility cutbacks to the tree-trimming budget, or a suicidal squirrel. Given that full-campus/full-load backup generation and microgrids are expensive to install, maintain, and operate, more focused applications of backup power are usually the better choice. The questions then become: what load profiles must be served, how long might emergencies last, and how can loads be reduced to minimize generation equipment while extending the runtime?

Analysis of consumption and demand metrics from submeters and data extracted from the EMCS and other energy-intensive systems, combined with a detailed emergency management plan, can answer these questions. Data analysis from scheduled system testing, particularly under mock emergency scenarios, can reveal flaws, unexpected power shortages, and additional opportunities to maximize benefits from emergency resources. Notably, very few operations demonstrate static load patterns over time. Changes in occupancy, equipment, mission, product or product mix, architectural layout, and dozens of other factors ensure that load profiles and activities deemed “critical” are eternally dynamic. Data and information derived from prior scenarios should be leveraged to improve future results at minimum expense.
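As one hedged illustration of using submeter data to size focused backup power: look at a high percentile of observed critical-load demand rather than summed nameplate ratings. The percentile choice and readings below are assumptions.

```python
# Sketch: derive a backup-generation sizing target from submetered
# critical-load demand (kW) instead of nameplate totals.
critical_load_kw = [210, 225, 198, 260, 240, 233, 275, 218, 245, 230]

sorted_kw = sorted(critical_load_kw)
idx = round(0.95 * (len(sorted_kw) - 1))  # index of the 95th percentile
p95_kw = sorted_kw[idx]
print(f"95th-percentile critical load: {p95_kw} kW "
      f"(candidate sizing target before load-shedding measures)")
```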

Question: How will you manage your facilities during an extended grid outage?

Proliferation of Advanced Meter Systems (AMS)

Dr. Timothy Walton, professor at James Madison University and a 24-year veteran CIA analyst, said to a gathering of national intelligence experts, “the old problem was not enough data; today it’s too much and of an extremely mixed quality.”[2] Energy professionals are slamming into the same problem. Smart meter deployments have grown exponentially over the past two decades, and Grand View Research projected that 2016 smart meter revenue (and presumably deployments) would be double that of 2012.[3] With the cost of non-volatile memory low and going lower, and cloud storage becoming a mainstream solution, long-term data storage from those systems has proliferated. As the Internet of Things (IoT) begins to penetrate the HVAC and building systems market, today’s massive data stream will appear paltry by comparison. Energy managers of today must arm themselves for the onslaught of data analysis that will be necessary tomorrow.

Example: A single smart meter is typically programmed to report in 15-minute increments. Each report may contain kWh, kW (average), kW (peak), and power factor. Those four metrics pencil out to roughly 140,000 discrete data points in a year (kWh alone is over 35,000 data points). Multiply by the number of meters on a typical campus, add weather data, HVAC, and other sensor data, and you can quickly overwhelm Excel and many traditional energy managers.
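The arithmetic behind those figures, as a quick sanity check:

```python
# Four 15-minute reports per hour, four metrics per report.
reports_per_year = 4 * 24 * 365          # 35,040 intervals per metric
metrics_per_report = 4                   # kWh, avg kW, peak kW, power factor
points_per_meter = reports_per_year * metrics_per_report
print(points_per_meter)                  # 140,160 data points per meter per year
```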

It’s Not Just Energy

We have reached the era where we can monitor and store almost infinite volumes of data. However, as the CIA has discovered, more data does not always mean more information. A good bumper sticker for this might be “Data Happens.” Leveraging that data and converting it into useful information requires effort and multiple information system skillsets.

Think of the information opportunities available from work order systems, parking garage data systems, building occupancy records, equipment repair tickets and costs – the list is long and growing. Savvy energy managers can superimpose these datasets onto energy demand and consumption to identify trends, patterns, outliers and opportunities. From the resulting information, they can then develop measures that not only save energy, but extend equipment life, reduce mechanical and maintenance personnel costs, and improve occupant and customer comfort and productivity.
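A minimal sketch of one such superimposition: join daily badge-in occupancy with daily consumption and flag high use on nearly empty days. The dataset names, values, and thresholds are hypothetical.

```python
# Sketch: flag days with significant energy use but minimal occupancy,
# a common sign of schedule or setback problems.
daily = [
    {"date": "2016-01-02", "kwh": 4100, "occupants": 12},   # Saturday
    {"date": "2016-01-04", "kwh": 5200, "occupants": 840},  # weekday
    {"date": "2016-01-09", "kwh": 4300, "occupants": 9},    # Saturday
]

LOW_OCCUPANCY = 50
HIGH_USE_KWH = 3000

for day in daily:
    if day["occupants"] < LOW_OCCUPANCY and day["kwh"] > HIGH_USE_KWH:
        print(f"{day['date']}: {day['kwh']} kWh with only "
              f"{day['occupants']} occupants; check schedules and setbacks.")
```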

Success is not measured by the volume of data we store, but by how much of it we convert to information and use to solve real-world energy challenges.

[1] FERC Order 745.

[2] “The Science of Problems and Solutions,” James Madison University, December 2013.

[3] Smart Meters Market Analysis by Application (Residential, Commercial, Industrial) and Segment Forecasts to 2020, Grand View Research, February 2014, ISBN 978-1-68038-074-3.

