When it comes to AI and automated machine learning, more data is good — location data is even better.

At Data Con LA 2019, I had the pleasure of co-presenting a tutorial session with Pitney Bowes Technical Director Dan Kernaghan. We told an audience of data analysts and budding data scientists about the evolution of location data for big data, and how location intelligence can add significant new value to a wide range of data science and machine learning business use cases.

Speeding model runs by using pre-processed data

What Pitney Bowes has done is take care of the heavy lifting of processing GIS-based data so that it comes ready to use with machine learning algorithms. Through a process called reverse geocoding, locations expressed as latitude/longitude pairs are converted to addresses, dramatically reducing the time it takes to prepare the data for analysis.
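To make the reverse-geocoding step concrete, here is a minimal sketch using the open-source geopy library and the public Nominatim service. It illustrates the concept only; the Pitney Bowes pipeline performs this conversion at scale against its own reference data.

# Minimal reverse-geocoding sketch: convert a latitude/longitude pair
# to a street address using geopy and the public Nominatim service.
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="location-intelligence-demo")
location = geolocator.reverse((42.3601, -71.0589))  # downtown Boston
print(location.address)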

With this approach, each address is then associated with a unique and persistent identifier, the pbKey™, and put into a plain text file along with 9,100 attributes associated with that address. Depending on your use case, you can then enrich your analysis with subsets of this information, such as crime data, fire or flood risk, building details, mortgage information, and demographics like median household income, age, or purchasing power.
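As a rough illustration of how that enrichment works downstream, joining your own records to such a flat file is a simple key-based merge. Every file name and column name below is hypothetical, not the actual Pitney Bowes schema.

import pandas as pd

# Your records, reverse geocoded to addresses with a pbKey attached
# (file and column names are hypothetical).
listings = pd.read_csv("listings_with_pbkey.csv")

# The flat-file attribute set keyed by the same identifier.
attributes = pd.read_csv("pb_address_attributes.txt", sep="|")

# Enrich each record with just the attribute subset the use case needs.
enriched = listings.merge(
    attributes[["pb_key", "crime_index", "flood_risk", "median_hh_income"]],
    on="pb_key",
    how="left",
)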

Surfacing predictors of summer rental demand: location-based attributes

For Data Con LA, we designed a use case that we could enrich with location data: a machine learning model to predict summer revenue for a fictional rental property in Boston. We started with first-party data on 1,070 rental listings in greater Boston that we sourced from an online property booking service. That data included attributes about the properties themselves (type, number of bathrooms/bedrooms, text description, etc.), the hosts, and summer booking history.

Then we layered in location data from Pitney Bowes for each rental property, based on its address: distance to nearest public transit, geodemographics (CAMEO), financial stress of city block, population of city block, and the like.

Not surprisingly, the previous year’s summer bookings and scores based on the listing description ranked as the most important features of a property. Unexpectedly, though, distance to the nearest airport ranked third in importance. Other location-based features that surfaced as important predictors of summer demand included distance to Amtrak stations, highway exits, and MBTA stations; block population and density measures; and block socio-economic measures.
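To show how a ranking like this is produced, here is a sketch using a scikit-learn random forest. The input file and feature names are illustrative stand-ins, not the actual Data Con LA dataset.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Property attributes plus location-based enrichments (names invented).
df = pd.read_csv("enriched_listings.csv")
features = [
    "nightly_price", "bedrooms", "bathrooms", "description_score",
    "last_summer_days_booked", "airport_distance_km",
    "mbta_distance_km", "block_population",
]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["summer_days_booked"], random_state=42
)

model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

# Rank features by their contribution to the model's predictions.
importances = pd.Series(
    model.feature_importances_, index=features
).sort_values(ascending=False)
print(importances)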

By adding location data to our model, we increased the accuracy of our prediction of how frequently “our” property would be rented. Predicting that outcome is important, but more important is determining what we can do to change future results. In this scenario, we can change the price, for example, and rerun the model until we find the combination of price and number of days rented that meets our revenue objective.
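In code, that what-if analysis can be a simple sweep over candidate prices, re-scoring the model at each step. This sketch reuses the hypothetical model and data split from the sketch above; the $25,000 revenue target is invented.

import numpy as np

revenue_target = 25_000                  # invented summer revenue goal
base_features = X_test.iloc[[0]].copy()  # one property's feature row

# Try a range of nightly prices and re-score the model each time.
for price in np.arange(150, 401, 25):
    candidate = base_features.copy()
    candidate["nightly_price"] = price
    days_booked = model.predict(candidate)[0]
    revenue = price * days_booked
    marker = "  <-- meets target" if revenue >= revenue_target else ""
    print(f"${price}/night: {days_booked:.0f} days, ${revenue:,.0f}{marker}")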

Building effective use cases for data science

A Pitney Bowes Business Partner since 2015, Ironside Group often incorporates Pitney Bowes data — both pbKey flat-file data and traditional GIS-based datasets like geofences — into customized data science solutions built to help companies grow revenue, maximize efficiency, or understand and minimize risk. Here are some example use cases that build location-based data into the model design.

Retail loss prevention. A retailer wanting to analyze shortages, cash loss and safety risks expected that store location would be a strong predictor of losses or credit card fraud. However, models using historical store data and third-party crime risk data found that crime in the area was not a predictor of losses. Instead, the degree of manager training in loss prevention was the most significant predictor — a finding that influenced both store location decisions and investments in employee training programs.

Predictive policing. A city police department wanted a data-driven, data science-based approach to complement its fledgling “hot spot” policing system. The solution leverages historical crime incident data combined with weather data to produce an accurate crime forecast for each patrol shift. Patrol officers are deployed in real time to “hot spots” via a map-based mobile app. Over a 20-week study, the department saw a 43% reduction in targeted crime types.

Utilities demand forecasting. A large natural gas and electricity provider needed a better way to anticipate demand in different areas of its network to avoid supply problems and service gaps. The predictive analytics platform developed for the utility uses cleaned and transformed first-party data from over 40 geographic points of delivery, enriched with geographic and weather data to improve the model’s demand predictions. The result is a forecasting platform that triggers alerts automatically and allows proactive energy supply adjustments based on predicted demand trends.
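A much-simplified version of that forecast-and-alert loop might look like the following. The input file, column names, and 90%-of-capacity threshold are all invented for illustration.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Historical load per delivery point, enriched with weather features
# (file and column names are invented; day_of_week is numeric).
history = pd.read_csv("delivery_point_history.csv")
features = ["temp_forecast_f", "wind_mph", "day_of_week", "is_holiday"]

# Fit one demand model per delivery point and flag points whose
# predicted demand approaches capacity.
for point_id, group in history.groupby("delivery_point_id"):
    model = GradientBoostingRegressor().fit(group[features], group["demand_mwh"])
    next_period = group[features].tail(1)   # stand-in for a real forecast row
    predicted = model.predict(next_period)[0]
    capacity = group["capacity_mwh"].iloc[0]
    if predicted > 0.9 * capacity:          # invented alert threshold
        print(f"ALERT {point_id}: forecast {predicted:.0f} MWh "
              f"vs capacity {capacity:.0f} MWh")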

About Ironside Group and Pitney Bowes

Ironside Group was founded in 1999 as an enterprise data and analytics solution provider and systems integrator. Our data science practice is built on helping clients organize, enrich, report on, and predict outcomes with their data. Our partnership with Pitney Bowes leads to client successes as we combine our use case-based approach to data science with Pitney Bowes data sets and tools.
