The Evolution of Marketing and Knowing Your Customer (A Sneak Peek into Your Browser Today)

Pritam Pratik Agrawal
5 min read · May 2, 2021

This article covers how businesses have improved over time at learning what their customers need, or, to be precise, "want to have". It also covers the modern technical side of why customer data is so important for a business to prosper.

The Ancient Personalised Touch

Around the 16th century, if people wanted clothes with beautiful embroidery, or a fabric finer than most with a personalised touch, they went to a professional who provided that personalised service. It usually took weeks or months, as the customer spent time specifying their desired details, and it came with a high premium price tag.

Change with the Industrial Revolution

Then came the Industrial Revolution, from the late 18th century to the early 19th century: big factories and huge assembly lines with automation to mass-produce goods at a much lower price point, but also customers losing the ability to add a personalised touch during production of the goods.

Given that the reach of the produced goods was limited to a specific region because of packaging and travel constraints, they could not reach a larger audience. They were usually sold at nearby shops and stored in small warehouses, thereby limiting the customer's choice.

The Internet Came as a Boon for the Masses

After the fast-growing internet reached outlets and shops (ordinary people did not have access yet, as the internet was initially expensive and available mainly to institutions), companies could finally get proper feedback on what their customers liked or disliked, which opened the door to more choices than ever for the customer. This brought more power to the consumer.

After the Industrial Revolution, there were many competitors in the market for any particular product. The main agenda then was longevity and durability. Companies crossed oceans to trade their products in return for customer loyalty. But did it really happen…?

As the competition was to make products with a longer lifespan, the interval between repeat purchases grew, and companies suffered for it. Then came the policy of "planned obsolescence". Under this policy, companies design a product with a limited useful life, so that after a particular period it looks obsolete, fails to function properly, or steadily loses functionality. Soon this strategy spread across the entire globe, making all of us a part of it, unwillingly.

But for this strategy to work, companies needed to make products that customers would prefer over the competition during the product's lifecycle, before the production line is finally killed off to make something new for those same customers. But can all of this happen spontaneously? Definitely not…

What companies rely upon most is consumer data. This data is of the utmost importance to companies, and they will outright spend millions or even billions of dollars to acquire and maintain it.

Companies now smartly push tiny upgrades or add gimmicks to their products based on customer requirements, to retain their customer community. But how do they get such a huge amount of customer data?

Evolution of Data Analytics and Machine Learning Algorithms

Whenever we hit a particular URL or scroll through a company's product catalogue, what we give away is our data. The amount of time one spends on a product page and the number of hits to its URL reveal how much one needs, or "wants to have", that product.
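As a toy illustration of that idea (the event data and the scoring formula are entirely my own assumptions, not how any real system weighs these signals), hits and time-on-page can be folded into a crude interest score:

```python
from collections import defaultdict

# Hypothetical page-view events: (user, product_url, seconds_on_page).
events = [
    ("alice", "/catalogue/shoes", 90),
    ("alice", "/catalogue/shoes", 45),
    ("alice", "/catalogue/hats", 4),
]

def interest_score(events):
    """Crude 'want to have' signal: one point per hit plus one per minute viewed."""
    scores = defaultdict(float)
    for user, url, seconds in events:
        scores[(user, url)] += 1 + seconds / 60
    return dict(scores)

scores = interest_score(events)
# Two visits totalling 135 seconds on the shoes page far outweigh
# a 4-second glance at the hats page.
print(scores)
```

Real systems use far richer signals (scroll depth, add-to-cart events, return visits), but the principle is the same: repeated, lingering attention is treated as intent.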

How is it done?

I'm currently learning Google Cloud Platform and have some instances running in the cloud, hosting an application. The application is open to the public internet, and I have wired up logging on my instances to store logs, so that if anything goes down while a user is using my application, I have details of the events and can respond accordingly.

Now, what do these logs contain? Let me provide a snapshot of the logs that were generated when I accessed the application over the public internet.

If you look at the data payload above, it logs the principal email used to connect to the URL (via logins or even logged-in browsers), and the callerIp, which is none other than the public IP address of the device used. On top of that, it even logs the browser and client version used, under callerSuppliedUserAgent.
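To make this concrete, here is a minimal sketch of pulling those fields out of an audit-log-style entry. The JSON below is a hand-made stand-in for a real Cloud Logging payload (the field names follow the Cloud Audit Logs protoPayload structure, but the values are invented):

```python
import json

# Hand-crafted stand-in for a Cloud Audit Logs entry; values are invented.
raw_entry = """
{
  "protoPayload": {
    "authenticationInfo": {"principalEmail": "user@example.com"},
    "requestMetadata": {
      "callerIp": "203.0.113.42",
      "callerSuppliedUserAgent": "Mozilla/5.0 (X11; Linux x86_64) Chrome/90.0"
    }
  }
}
"""

entry = json.loads(raw_entry)
payload = entry["protoPayload"]
meta = payload["requestMetadata"]

# The three fields discussed above: who, from where, and with what client.
principal = payload["authenticationInfo"]["principalEmail"]
caller_ip = meta["callerIp"]
user_agent = meta["callerSuppliedUserAgent"]

print(principal, caller_ip, user_agent)
```

Three dictionary lookups are all it takes to turn one log line into an identity, a location hint, and a device fingerprint.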

Now, Cloud Logging keeps data for 30 days by default before it is purged. But even a second is enough to extract all the important data from it.

From Cloud Logging, a sink to the Cloud Pub/Sub service can be attached, and log data flows into it, routing terabytes of real-time data to Cloud Dataflow. This stream carries information about the page the user is viewing, the time spent on that page, the user's details, and the user's personalised views of particular products. Machine learning algorithms are applied to this data, and the results are stored in BigQuery or Bigtable tables. Companies pay huge sums to access tables like these: all they need is the number of hits to a particular web page, or the time spent on a particular website, and from these tables Google, and also Amazon, are making a good fortune 😉. And let's not forget, these companies have the shared user details mentioned above.
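The kind of aggregation such a pipeline performs can be mimicked locally. This is only a toy sketch of the shape of work a Dataflow job might do before writing rows to BigQuery; the event fields and the grouping are my own assumptions, not Google's actual pipeline:

```python
from collections import Counter

# Toy stand-ins for log events streamed in via Pub/Sub.
stream = [
    {"user": "u1", "page": "/product/123", "seconds": 30},
    {"user": "u2", "page": "/product/123", "seconds": 12},
    {"user": "u1", "page": "/product/999", "seconds": 3},
]

# Dataflow-style aggregation: hits and total dwell time per page,
# i.e. the kind of row a "product interest" table might hold.
hits = Counter(event["page"] for event in stream)
dwell = Counter()
for event in stream:
    dwell[event["page"]] += event["seconds"]

rows = [
    {"page": page, "hits": count, "total_seconds": dwell[page]}
    for page, count in hits.items()
]
print(rows)
```

In production this grouping would run continuously over a windowed stream rather than a list, but the output is the same idea: per-page attention metrics, ready to query.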

Using this data, they can now send you advertisements that pop up in your browser, since you already accepted cookies when you hit the URL in the first place. Companies can now directly see what their customers want or browse through, compared to earlier years when they had to rely on feedback from outlets and stores, which could easily have been forged.

This data is also widely misused: by some companies to pre-determine global events that should otherwise be a private affair, or even at an individual level, breaching one's privacy. And this data is available, for a huge sum, on the dark web.
