By Chris Griffin, Professional Services Director.

In Romeo and Juliet, William Shakespeare wrote, “What’s in a name? That which we call a rose by any other name would smell as sweet.” Well, when it comes to petroleum data management, names are all about numbers and, most importantly, well identification numbers. It turns out that two wells that smell the same may not actually be the same well, unless, of course, one has been re-drilled.

Recently, we’ve seen an increase in the number of customers asking how we deal with well identification. It seems to be an ongoing issue as more Data Managers aim for improved data quality. Enhanced data quality rules and new data stores are driving this requirement.

This week, the PPDM organisation launched a revised standard, API Number 2013, to ensure that wells in the United States are identified correctly.  The table below shows an extract definition from their news release:

[Table: PPDM API 2013 number definition]


While these revised standards are very welcome, non-standard identifiers continue to be a significant information management challenge.  There is also the issue of tackling well identification outside the United States.

We address these challenges for our customers in a number of ways:

In Transit

Data can be validated against data quality rules in transit, before it is loaded into the data stores. These validation rules can be embedded in the existing loading architecture to provide seamless data validation.
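As a rough sketch of such an in-transit rule, the snippet below rejects malformed well identifiers before loading. It assumes the 14-digit US Well Number layout associated with PPDM's API Number 2013 work (state 2 digits, county 3, unique well 5, sidetrack 2, event sequence 2); the function name and sample records are illustrative, not from any particular product.

```python
import re

# Assumed 14-digit layout: state(2)-county(3)-well(5)-sidetrack(2)-event(2).
# Hyphens are treated as optional so both punctuated and compact forms pass.
API_PATTERN = re.compile(
    r"^(?P<state>\d{2})-?(?P<county>\d{3})-?(?P<well>\d{5})"
    r"-?(?P<sidetrack>\d{2})-?(?P<event>\d{2})$"
)

def validate_well_id(raw):
    """Return the parsed components if raw is a well-formed identifier,
    otherwise None (the record is rejected before it reaches the store)."""
    match = API_PATTERN.match(raw.strip())
    return match.groupdict() if match else None

# Records failing the rule never reach the data store.
incoming = ["42-501-20130-01-00", "4250120130", "not-a-well"]
valid = [r for r in incoming if validate_well_id(r)]
```

In a real deployment this check would sit alongside many other rules (coordinate ranges, date sanity, reference-data lookups) in the loading pipeline.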

In Store

The data already held in store can be validated to confirm that it is fit for purpose, and to identify areas where quality may have degraded over time. Duplicate wells are just one example. Detecting duplicates may require matching on additional attributes, each of which can be weighted, with a match-percentage threshold applied to ensure a valid match. Typical matching attributes include state, county, latitude, longitude, spud date and field.
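A minimal sketch of that weighted matching, with entirely hypothetical weights and threshold (in practice these would be tuned per customer and per data store):

```python
# Illustrative weights over the attributes named above; they sum to 1.0.
WEIGHTS = {"state": 0.10, "county": 0.15, "latitude": 0.25,
           "longitude": 0.25, "spud_date": 0.15, "field": 0.10}
THRESHOLD = 0.80  # assumed minimum weighted score to declare a duplicate

def match_score(a, b):
    """Sum the weights of attributes that agree between two well records."""
    score = 0.0
    for attr, weight in WEIGHTS.items():
        if a.get(attr) is not None and a.get(attr) == b.get(attr):
            score += weight
    return score

def is_duplicate(a, b, threshold=THRESHOLD):
    return match_score(a, b) >= threshold

# Two records that agree on everything except the field name still
# score 0.90, above the threshold, so they are flagged as duplicates.
rec_a = {"state": "42", "county": "501", "latitude": 28.70,
         "longitude": -98.12, "spud_date": "2013-05-01",
         "field": "EAGLE FORD"}
rec_b = dict(rec_a, field="EAGLEFORD")
```

Real implementations would normally add fuzzy comparisons (coordinate tolerance, name similarity) rather than strict equality, but the weighting idea is the same.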

This in-store validation can also include cross-system validation to ensure that the information held in one or more systems is correct. This is common when we come to migrate data to a new target system, as the target usually requires information from a number of database sources and, of course, everyone’s favourite: the spreadsheet. This source data must be verified both within its own domain and between the systems. It may also require lookup checks against the target reference data before being classed as fit for purpose, and hence fit for migration.
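The shape of those cross-system and reference-data checks can be sketched as below. The system names, records and reference list are hypothetical; the point is that a record must agree across sources and resolve against the target's reference data before it is classed as fit for migration.

```python
# Hypothetical extracts from two source systems keyed by well identifier,
# plus the target system's reference list of valid field names.
source_a = {"42-501-20130-01-00": {"field": "EAGLE FORD"}}
source_b = {"42-501-20130-01-00": {"field": "EAGLEFORD"}}  # spreadsheet export
target_fields = {"EAGLE FORD", "PERMIAN"}

def cross_check(a, b, reference):
    """Report disagreements between systems and failed reference lookups."""
    issues = []
    for well_id in a.keys() & b.keys():
        if a[well_id]["field"] != b[well_id]["field"]:
            issues.append((well_id, "field mismatch between systems"))
    for well_id, rec in b.items():
        if rec["field"] not in reference:
            issues.append((well_id, "field not in target reference data"))
    return issues

issues = cross_check(source_a, source_b, target_fields)
```

Here the same well fails on both counts: the two sources disagree, and the spreadsheet's field name does not exist in the target reference data, so the record would be quarantined rather than migrated.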

Further reading


    data migration planning guide