Recently in Address Standardization Category

Melissa provides numerous APIs to handle the standardization, cleansing, and verification of data elements such as addresses, names, phone numbers, and emails. Customers often ask us how to improve processing speed when running records through our APIs. Below are some performance tips to consider when implementing them:



The storage medium holding the data files can affect processing speed. We recommend storing our data files on solid-state drives, which have faster seek and read times than spinning disk drives.

We have also seen clients store the data files on a network share, which we definitely don't recommend. Accessing files over a network introduces latency, which hurts processing speed when blocks of data need to be fetched quickly. Data files should therefore be stored locally on the machine.



The amount and speed of available memory can also affect processing. Once our APIs read data from the hard drive, that data is cached in memory in case it needs to be accessed again shortly.


As a simple example, below is a list of phone numbers verified using our Phone Object API, along with the time in milliseconds it took to verify each number.


When the Phone Object API encounters a phone number in a new area code, there are spikes in the verify times as the Phone Object now has to go back and fetch a new block from the data files stored on a hard disk and cache it into memory. The more memory available on the system, the more that can be cached into memory as the API reads more blocks from the data files on the disk. And, as discussed in the previous section, having a faster hard drive will help keep those disk read times low when those data file reads occur.
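The caching behavior described above can be sketched with Python's `functools.lru_cache`. The `read_block` function and area-code keys below are illustrative stand-ins for a data-file read, not Phone Object internals: the first lookup of a "block" pays the simulated disk-read cost, while repeat lookups are served from memory.

```python
from functools import lru_cache

# Record each simulated disk read so we can see how often the cache misses.
DISK_READS = []

@lru_cache(maxsize=128)
def read_block(area_code):
    DISK_READS.append(area_code)  # stands in for an expensive disk read
    return f"data-for-{area_code}"

# Three phone numbers, but only two distinct area codes.
for number in ["949-555-0100", "949-555-0199", "800-555-0100"]:
    read_block(number[:3])

# Three lookups, but only two simulated disk reads: "949" was cached.
```

A larger `maxsize` plays the role of having more system memory available: more blocks stay cached, so fewer lookups fall through to the (slow) disk read.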



Multi-core processors are now the norm. If multiple cores are available on the system, we highly recommend that developers take advantage of them. When multithreading with our APIs, give each thread its own instance of the API object.

For example, if you have 8 cores, you may want to create 8 threads with our API instantiated 8 times: once per thread. Ideally, create a pool of threads with our API already instantiated and ready for processing; repeatedly reinitializing or re-instantiating the API introduces unnecessary overhead.
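A minimal sketch of that pattern in Python, using `threading.local` so each worker in the pool lazily creates and then reuses its own instance. `VerifierStub` is a hypothetical stand-in for one of our API objects; a real integration would construct the actual API object in its place.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class VerifierStub:
    """Hypothetical stand-in for an API object with expensive setup."""
    def __init__(self):
        self.ready = True  # imagine costly initialization happening here

    def verify(self, phone):
        # Placeholder "verification": just normalize the digits.
        return phone.strip().replace("-", "")

_local = threading.local()

def get_verifier():
    # Each worker thread creates its own instance once and reuses it,
    # avoiding both repeated initialization and cross-thread sharing.
    if not hasattr(_local, "verifier"):
        _local.verifier = VerifierStub()
    return _local.verifier

def process(phone):
    return get_verifier().verify(phone)

phones = ["800-635-4772", "949-858-3000"]
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(process, phones))
```

The thread-local lookup is what keeps instantiation out of the per-record hot path: the pool's threads persist across records, so each instance is built once per thread rather than once per call.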


The graph and table below show some of our own multithreaded testing with our Global Address Object® API on UK addresses. As shown, substantial speed increases can be obtained through multithreading:


Melissa's Improvements in Dynamics CRM


Dirty data, in all forms, is bad for business. Here at Melissa, our primary concern is cleansing it from all of your platforms, including Microsoft Dynamics® CRM. Melissa currently offers many solutions for Dynamics CRM in order to combat problems with bad data.


We offer the Personator® solution in order to cleanse and enrich your U.S. and Canadian data. We offer the Global Verify solution to correct and verify your addresses, phone numbers, names, and email addresses on an international level. Soon, we will release the Express Entry® solution in order to prevent bad data from entering your environment. As we strive to offer you the best solutions, Melissa constantly seeks to improve its solutions to better suit your needs.


Coming in a future update, we will offer the following new features to our Express Entry service:

• Personator Workflows

• Reverse Lookup for Express Entry

• Express Entry Integration into Global Verify

Personator Workflows

Dynamics CRM is utilized in many different ways in the business world. The creation of contact, account, and lead records is handled through many different environments that may not leverage the standard form. In addition, sometimes users may forget to use our services to cleanse and correct information before saving and storing a record. 

To address these issues, we have created workflows for the Personator solution covering the currently supported out-of-box entities. These workflows can be activated to run our Personator service on records automatically, such as upon creation of a new record. This allows records created in a different environment, such as a separate portal, to have their information validated automatically through our workflows.


Reverse Lookups for Express Entry

Different users enter address information in different orders. With Dynamics CRM's ability to customize forms, it is apparent that not everyone will start by entering a street address. With our new feature, Reverse Lookups, users can enter information starting from the most general piece of information down to the most specific. For example, now a user, after entering his or her default country, can begin by entering the postal code to determine the city and state of the particular record. After filling out these fields, the user can then enter in the street address and select from a list of addresses only in that particular city, state, and postal code.


Express Entry Integration into Global Verify

Many customers require different methods of verification. In order to address these concerns, we have integrated our Express Entry service into our Global Verify solution. Now, you can utilize the Express Entry service to autocomplete addresses when entering data as well as verify phone and email with the click of a button.

How to Do It All with Melissa


With Melissa, you can do it all - see for yourself with the brand new Solutions Catalog. This catalog showcases products to transform your people data (names, addresses, emails, phone numbers) into accurate, actionable insight. Our products are in the Cloud or available via easy plugins and APIs. We provide solutions to power Know Your Customer initiatives, improve mail deliverability and response, drive sales, clean and match data, and boost ROI.


Specific solutions include:

• Cleaning, matching & enriching data
• Creating a 360 degree profile of every customer
• Finding more customers like your best ones with lookalike profiling
• Integrating data from any source, at any time

Other highlights include: global address autocompletion; mobile phone verification; real-time email address ping; a new customer management platform; as well as info on a wealth of data append and mailing list services.


Download the catalog now:


MAILERS Online: On Demand & In the Cloud Presorting


Say hello to a no-contract, end-to-end mailing solution that's in the cloud and easy to use. MAILERS Online is a one-stop-shop for all your mailing and presort needs: upload your text or Excel® files to our server and let Melissa take it from there. We'll presort your list, based on your selected parameters, to get you the lowest postage rates! MAILERS Online is a flexible, on-demand software-as-a-service (SaaS) offering.


There is nothing to install and MAILERS Online never requires disks or updates. Use on a list-by-list basis--with no contract or subscription, you're never locked in. Use NCOA to locate customers who've moved; clean and verify address data to reduce undeliverable-as-addressed mail; and append missing data like ZIP® Codes, carrier routes, and suite numbers for stronger targeting and more efficient processing and delivery.


Find out what MAILERS Online can do for you today:

Melissa Data is set to attend Dreamforce #DF15 in San Francisco and will showcase its Listware for Salesforce all-in-one data cleansing app. Users are invited to download the app, submit a review to receive a free $15 Starbucks card, and be entered into a drawing for an Apple Watch. Come see us at North Hall Kiosk 1112!

Click here for more info!

Online Merchants Can Now Verify and Correct Worldwide Customer Addresses; Better Customer Data Reduces Shipping Costs and Eliminates Online Fraud

Rancho Santa Margarita, CALIF. - June 25, 2014 - Melissa Data, a leading provider of contact data quality and integration solutions, today announced a new data quality plug-in for users of Shopware, ensuring powerful contact data quality for e-commerce applications. The Shopware plug-in enables online merchants to verify and correct addresses as they are entered, validating information from more than 240 countries and territories in real-time. The plug-in is based on Melissa Data's Global Address Verification capabilities and was developed by NetzTurm GmbH, a service provider for the implementation of online stores, search engine optimization and web design. The two firms are now engaged in an exclusive partnership for data quality in Shopware-based ecommerce sites.

Online merchants can instantly validate and cleanse international addresses, correcting and standardizing them into the official postal format of each geographic locale, and adding missing components such as postal codes or region. Data fields are populated automatically using an efficient type-ahead feature that auto-completes entries after the first few digits or letters have been entered. Data entry is not only correct, but also up to 50 percent faster with reduced keystrokes, improving the customer's online experience and increasing website conversions.

"With data quality integrated into Shopware, online shoppers have only one step between shopping cart and checkout, which can drastically reduce the drop-off rate during the checkout process. Bad data is eliminated at the source, improving the customer's overall shopping experience with faster, easier order entry and fewer delivery errors," said Gary Van Roekel, COO, Melissa Data. "For the online merchant, accurate shipping and billing information is everything - reducing online fraud, enabling smooth operations and ensuring the best overall customer service."

In addition to ensuring lower shipping costs, timely deliveries and lower return rates, better customer data also improves the success rate of merchants' direct marketing activities. The solution also supports online shops in reaching new markets and customer groups. Both Melissa Data and NetzTurm also plan to develop further joint plug-in technologies for other e-commerce platforms in the future.

Click here to download the Shopware data quality plug-in, available as a stand-alone product, or bundled with "One-Page-Checkout" from NetzTurm. The plug-in is available at a net price of 759 € including 5,000 queries valid for one year and the auto-completion feature for international addresses.


Content Standards for Data Matching and Record Linkage

By David Loshin

As I suggested in my last post, applying parsing and standardization to normalize data value structure will reduce complexity for exact matching. But what happens if there are errors in the values themselves?

Fortunately, the same methods of parsing and standardization can be used for the content itself. This can address the types of issues I noted in the first post of this series, in which someone entering data about me would have used a nickname such as "Dave" instead of "David."

By introducing a set of rules for pattern recognition, we can organize a number of transformations from an unacceptable value into one that is more acceptable or standardized. Mapping abbreviations and acronyms to fully spelled out words, eliminating punctuation, even reordering letters in words to attempt to correct misspellings - all of these can be accomplished by parsing the values, looking for patterns that the value matches, and then applying a transformation or standardization rule.

In essence, we can create a two-phased standardization process that first attempts to correct the content and then attempts to normalize the structure. Applying these same rules to all data sets results in a standard representation of all the records, which reduces the effort in trying to perform the exact matching.
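The two-phase process described above can be sketched in a few lines of Python. The mapping tables and rules here are illustrative placeholders, not a real standardization engine: phase one corrects content (nicknames, abbreviations, punctuation), and phase two normalizes structure.

```python
# Illustrative lookup tables; a real engine would have far richer rules.
NICKNAMES = {"dave": "david", "bob": "robert"}
ABBREVIATIONS = {"st": "street", "ave": "avenue"}

def standardize_content(token):
    # Phase 1: strip punctuation and map nicknames/abbreviations
    # to their standardized equivalents.
    t = token.lower().strip(".,")
    return NICKNAMES.get(t, ABBREVIATIONS.get(t, t))

def standardize(value):
    # Phase 1 runs token by token; phase 2 then normalizes structure
    # (here, simple title-casing and single spacing -- real rules
    # would also reorder components into a standard layout).
    tokens = [standardize_content(t) for t in value.split()]
    return " ".join(t.capitalize() for t in tokens)

standardize("Dave Loshin")   # content fix: nickname expanded
standardize("123 Main St.")  # content fix: abbreviation expanded
```

Applying the same `standardize` function to every data set gives each record a single canonical form, which is what makes the subsequent exact match feasible.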

Yet this process may still allow variance to remain, and for that we have some other algorithms that I will touch upon in upcoming posts.

By David Loshin

In my last few posts, I discussed how structural differences impact the ability to search and match records across different data sets. Fortunately, most data quality tool suites use integrated parsing and standardization algorithms to map structures together.

As long as there is some standard representation, we should be able to come up with a set of rules that can help to rearrange the words in a data value to match that standard.

As an example, we can look at person names (for simplicity, let's focus on name formats common to the United States). The general convention is that people have three names - a first name, a middle name, and a surname. Yet even limiting our scope to just these components (that is, we are ignoring titles, generationals, and other prefixes and suffixes), there is a wide range of variance for representing the name. Here are some examples, using my own name:

• Howard David Loshin
• Howard D Loshin
• Howard D. Loshin
• David Loshin
• Howard Loshin
• H David Loshin
• H. David Loshin
• H D Loshin
• H. D. Loshin
• Loshin, Howard D
• Loshin, Howard D.
• Loshin, H David
• Loshin, H. David
• Loshin, H D
• Loshin, H. D.

There are different versions depending on whether you use abbreviations or full names, punctuation, and the order of the terms. A good parsing engine can be configured with the different patterns and will be able to identify each piece of a name string.

The next piece is standardization: taking the pieces and rearranging them into a desired order. The example might be taking a string of the form "last_name, first_name, initial" and transforming that into the form "first_name, initial, last_name" as a standardized or normalized representation. Using a normalized representation will simplify the comparison process for data matching and record linkage.
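A toy version of that parse-then-rearrange step, in Python. It handles only the variants listed above: it detects the surname-first comma form, strips periods from initials, and emits a normalized first-middle-last ordering. A production parsing engine would use configurable patterns rather than this hard-coded logic.

```python
def normalize_name(raw):
    """Map U.S. name variants to a normalized 'first middle last' form."""
    if "," in raw:
        # Surname-first form: "Loshin, Howard D." -> move surname to the end.
        last, rest = [p.strip() for p in raw.split(",", 1)]
        parts = rest.split() + [last]
    else:
        parts = raw.split()
    # Strip trailing periods from initials: "D." -> "D".
    return " ".join(p.rstrip(".") for p in parts)

# All of these variants map to the same normalized string:
normalize_name("Loshin, Howard D.")  # comma form with punctuated initial
normalize_name("Howard D. Loshin")   # natural order with punctuated initial
normalize_name("Loshin, H. David")   # comma form, abbreviated first name
```

Note that normalization alone does not make "H David Loshin" and "Howard David Loshin" identical; resolving an initial against a full name is a content problem of the kind discussed earlier, not a structural one.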

Structural Differences and Data Matching

By David Loshin

Data matching is easy when the values are exact, but there are different types of variation that complicate matters. Let's start at the foundation: structural differences in the ways that two data sets represent the same concepts. For example, early application systems used data files that were relatively "wide," capturing a lot of information in each record, but with a lot of duplication.

More modern systems use a relational structure that segregates unique attributes associated with each data concept - attributes about an individual are stored in one data table, and those records are linked to other tables containing telephone numbers, street addresses, and other contact data.

Transaction records refer back to the individual records, which reduces the duplication in the transaction log tables.

The differences are largely in the representation - the older system might have a field for a name, a field for an address, perhaps a field for a telephone number, and the newer system might break up the name field into a first name, middle name, and last name, the address into fields for street, city, state, and ZIP code, and a telephone number into fields for area code and exchange/line number.
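A sketch of that field mapping in Python, splitting a legacy "wide" record into the finer-grained fields a modern schema expects. The parsing rules are simplistic placeholders for illustration; real migrations rely on the parsing and standardization tooling discussed in this series.

```python
import re

def split_phone(phone):
    # Reduce to digits, then separate area code from exchange/line.
    digits = re.sub(r"\D", "", phone)
    return {"area_code": digits[:3], "exchange_line": digits[3:]}

def split_name(name):
    # Naive split: first token, last token, everything between as middle.
    first, *middle, last = name.split()
    return {"first": first, "middle": " ".join(middle), "last": last}

legacy = {"name": "Howard David Loshin", "phone": "(800) 635-4772"}
record = {**split_name(legacy["name"]), **split_phone(legacy["phone"])}
```

Once both data sets are expressed in the same granular structure, a record in one can be compared field by field against records in the other.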

These structural differences become a barrier when performing records searches and matching. The record structures are incompatible: different number of fields, different field names, and different precision in what is stored.

This is the first opportunity to consider standardization: if structural differences affect the ability to compare a record in one data set to records in another data set, then applying some standards to normalize the data across the data sets will remove that barrier. More on structural standardization in my next post.

In a Global Economy, a Global Solution is Vital

By Patrick Bayne
Data Quality Tools Software Engineer

Heidelberglaan 8
3584 CS Utrecht

If you were given this address, how would you know it was valid? Is it formatted correctly? How long would it take you to verify? For years, businesses have understood the need for address validation solutions, because clean, accurate data is essential. Without accurate and consistent data, customers are lost to missed mailings, synchronization across servers is difficult, and reports become inaccurate - all adding costs and missed opportunities for your business.

In an ever-growing global economy, there is a strong need for a global address solution that is not only accurate but cost-effective. Melissa Data, a longtime value leader in address cleansing solutions, is proud to announce the launch of its Global Address Verification Web Service. Now it's easier than ever to cleanse, validate, and standardize your international data - allowing you to make confident business decisions, execute accurate mailings and shipping, and maintain contact with your customers.

The Global Address Verification Web Service easily integrates into your systems. Because of the nature of a Web service, any platform that can communicate with the Web is open to use the service. And the Web service supports a variety of popular protocols - SOAP, XML, JSON, REST and Forms. The multiplatform, multi-Web protocol openness of the service allows simple connection to any point of entry, or batch system.
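As a hedged sketch of what a JSON/REST integration might look like: the code below only builds a request URL from address components. The base URL and parameter names (`a1`, `loc`, `postal`, `ctry`) are assumptions for illustration, not the documented Melissa interface; consult the service's reference documentation for the real endpoint and fields.

```python
from urllib.parse import urlencode

def build_request(base_url, address, locality, postal, country):
    # Assemble a GET query string from the address components.
    # Parameter names here are hypothetical placeholders.
    query = urlencode({
        "a1": address,
        "loc": locality,
        "postal": postal,
        "ctry": country,
    })
    return f"{base_url}?{query}"

url = build_request("https://example.com/verify",
                    "Heidelberglaan 8", "Utrecht", "3584 CS", "NL")
```

Because the service is protocol-neutral, the same components could just as easily be sent as a SOAP envelope or a JSON POST body from any platform that can reach the Web.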

Enhanced Global Address Verification

What if you only had the street address and not the actual country name? Global Address Verification Web Service's unrestrictive address cleansing capabilities will append the name of the country to ensure deliverability.

The solution automatically puts components into the correct address field, making applications that process location data more accurate - bringing more value to your data. The process is also resilient to erroneous, non-address data. The solution can translate addresses between many languages and can format address to a country's local postal standards.

Result Codes for the Highest Verification Level

While it's easy to set up the Web Service and make calls to it, you will also know exactly what changed in the submitted address through result codes. There are three types of codes that detail data quality: AE (Error) codes signify missing or invalid information; AC (Change) codes indicate address components that were changed; and new AV (Verification) codes tell you how strongly the address was verified against the reference data we have for a particular country.

Quantifying the quality of a match is easy through the result codes. An AV followed by a 2 designates that the record was matched to the highest level possible according to the reference data available. An AV followed by a 1 denotes that the address was partially verified, but not to the highest level possible. The digit that follows, from 1 to 5, indicates the level to which the address was verified. Think of it as a sliding scale.

For countries like France, Finland, and Germany, where data is available down to the delivery point, a result code of AV24 or AV25 tells you the address was fully verified. A country such as American Samoa, where reference data extends only to the locality level, would be fully verified with a result code of AV22. Any missing or inaccurate information changes the result to partially verified. These user-friendly result codes help you make more informed decisions about your data.
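Interpreting the AV scale described above takes only a few lines. The helper below follows the text's own conventions (second character pair: 1 = partial, 2 = full; final digit: verification level 1 through 5); the function itself is a sketch, not part of the Web service.

```python
def describe_av(code):
    """Decode an AV result code like 'AV25' into (status, level)."""
    status = {"1": "partially verified", "2": "fully verified"}[code[2]]
    level = int(code[3])  # 1 (most general) through 5 (delivery point)
    return status, level

describe_av("AV25")  # full verification down to the delivery point
describe_av("AV12")  # partial verification, locality level
```

Whether a given level counts as "fully verified" still depends on the reference data for that country: AV22 is the best possible result for American Samoa, while delivery-point countries can reach AV24 or AV25.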

The Global Address Verification Web Service will open new doors to cleanse and validate your international data. The service is operating system and programming language neutral, allowing you to use it wherever you desire.

All data is maintained by Melissa Data, reassuring you that it is up-to-date and relieving you of the stress of any maintenance. You will have confidence that your data is accurate and be able to make informed, effective business decisions based on your data, increasing your productivity. So the next time you see, "Heidelberglaan 8, 3584 CS Utrecht," you will know how to quickly and assuredly verify that it is a valid address.

--- Patrick Bayne is a data quality tools software engineer at Melissa Data.

For a free trial of the Global Address Verification Web Service, click here.