C-Centric nominated for the “Most Innovative use of AI” award by Data IQ judges. https://www.ccentric.co.uk/c-centric-nominated-for-the-most-innovative-use-of-ai-award-by-data-iq-judges/ Thu, 11 Sep 2025 13:32:01 +0000 https://staging.ccentric.co.uk/?p=3612

The Data IQ judges have nominated C-Centric for the “Most Innovative Use of AI” award. The nomination recognises the development of agentic bots that advise consumers and recommend options on utility switching. The agentic bots ingest consumer browsing behaviour, socio-demographic attributes and home value data to power conversational recommendations. They then transfer the consumer to the supplier’s e-commerce checkout with the selected product details pre-populated. They can also initiate and hand over the conversation to the supplier’s chat agents to complete the transaction, so no conversational context is lost. In A/B tests this has increased conversion by up to 89%.

Why You Should Re-Platform Your Customer Marketing Database To Snowflake https://www.ccentric.co.uk/why-you-should-re-platform-your-customer-marketing-database-to-snowflake/ Thu, 11 Sep 2025 13:03:01 +0000 https://staging.ccentric.co.uk/?p=3608

We are often asked by CDOs and CMOs to document the business case for moving their marketing data platform to Snowflake. In this article I explore the key reasons why you should actively consider re-platforming your customer marketing database onto Snowflake.

 

Enhanced Performance and Speed

Marketing databases often struggle with processing and analysing large datasets, leading to bottlenecks and slower query response times. Snowflake processes queries using massively parallel processing (MPP) compute clusters. By providing elastic access to scalable compute at low cost, it addresses some common speed and performance challenges in customer management.

 

There are five areas we find most transformative for clients:

 

  • Campaign execution & performance
  • Cost
  • Real-time personalisation
  • Model refresh
  • Self-serve customer and campaign MI

 

 

Improved Campaign management

 

By re-platforming your customer marketing database to Snowflake you get a quantum leap in performance and speed when running campaigns.

This is critical because today’s customer marketing campaigns run off significantly larger data volumes, more complex data structures and a wider range of data types than even two years ago. Campaign selections over large data sets that would previously have timed out or taken hours now run in seconds on Snowflake.

 

Snowflake’s support for diverse data types and semi-structured data empowers your marketing team with the capability and speed to harness the full spectrum of customer information. This includes unstructured data from devices, messaging, social media, clickstream data and more. We have helped clients build high-performing campaigns that query customer service chat, email and voice call transcripts directly from nested JSON data in Snowflake. Its ability to function as a data lake query engine provides great scope to incorporate additional data for campaigns at speed.
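As an illustration of the kind of query involved, here is a minimal sketch of pulling utterances out of nested JSON transcripts with LATERAL FLATTEN; the contact_events table and its field names are hypothetical stand-ins for whatever your ingestion pipeline produces.

```sql
-- Hypothetical CONTACT_EVENTS table with a VARIANT column RAW holding nested JSON transcripts.
SELECT
    e.raw:customer_id::string AS customer_id,
    e.raw:channel::string     AS channel,      -- chat / email / voice
    t.value:speaker::string   AS speaker,
    t.value:text::string      AS utterance
FROM contact_events e,
     LATERAL FLATTEN(input => e.raw:transcript) t    -- unnest the transcript array
WHERE e.raw:channel::string = 'chat'
  AND e.raw:created_at::timestamp >= DATEADD(day, -7, CURRENT_TIMESTAMP());
```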

 

Cost

What often surprises clients is the cost benefit that accompanies such increased performance. Usually there is a straight-line cost saving at the time of migration, especially if you have been on a managed service with an outsourcer.

Key FD-friendly features

  • Pay for what you use and avoid over-provisioning – Snowflake’s pricing model means you only pay for the resources you use, which saves money in the long run. You can automatically scale usage up or down based on your needs, so there is no risk of over-provisioning (see the warehouse sketch after this list).
  • Snowflake offers cost and workload optimisation features that help you enforce cost control and discover resources that need fine-tuning.
  • Eliminate legacy software licence fees
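As a sketch of the pay-for-what-you-use point above, a warehouse can be created so that it suspends itself whenever it is idle; the warehouse name and thresholds are illustrative.

```sql
-- Illustrative warehouse that only bills while queries are actually running.
CREATE WAREHOUSE IF NOT EXISTS marketing_wh
  WITH WAREHOUSE_SIZE = 'XSMALL'     -- start small; resize only if workloads demand it
       AUTO_SUSPEND = 60             -- suspend after 60 seconds idle, so idle time isn't billed
       AUTO_RESUME = TRUE            -- resume automatically when the next query arrives
       INITIALLY_SUSPENDED = TRUE;
```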

 

Many of the major Martech tool vendors run directly off the Snowflake data engine. 

For example, we often set up Adobe Campaign to run directly off the Snowflake data engine. In addition to significant speed improvements, we also eliminate the wasted effort, time and latency of data wrangling into campaign marts.

Most of the leading technologies in the modern martech and adtech space will talk directly to the Snowflake data engine. These Martech tool vendors are driven by the need to get closer to the unified customer data store, equipped with native processing capabilities. This also means that you are not locked into any tool vendor: you can plug and play best-of-breed tools off your centralised data spine. This is why many CMOs favour going down a composable CDP route instead of a siloed, packaged CDP. By working with tools that are closer to the data, you are optimising the expensive time of your data engineers and cutting the time and expense of getting your campaigns right. Marketers shouldn’t be hampered by data friction; instead they should be working on what they love to do: delivering differentiated campaigns with optimal speed, accuracy and agility.

 

Matching

Snowflake supports non-SQL code within Snowpark for complex transformations, simplifying data integration. We use our data matching and identity resolution platform, AudiencePlus, within Snowflake to link records and consolidate diverse datasets into a single source of truth. This breaks down data silos, enabling a holistic view of customer interactions and preferences. It uses AI to recognise partial and misspelt data in names, addresses, emails and contact numbers. It utilises a UK data universe to append data fields (forename, DoB, email, mobile) to assist the matching algorithms and record linking. This provides a tunable matching environment for both deterministic and probabilistic matching (a simplified deterministic sketch follows the list below).

  • Link accounts for one person
  • Create household views
  • Link a customer’s digital activity across Web and app
  • Bridge anonymous to known IDs
  • Match and overlay external commercial data by name & address & other match keys
  • Increase onboarding match rates with Google/Facebook & other ad platforms
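AudiencePlus itself is proprietary, but a heavily simplified sketch of the deterministic side of record linking might look like the following; table names, columns and match rules are purely illustrative.

```sql
-- Illustrative deterministic pass: exact email match, then postcode + surname as a fallback.
CREATE OR REPLACE TABLE matched_customers AS
SELECT c.customer_id, w.web_visitor_id, 'EMAIL_EXACT' AS match_rule
FROM crm_customers c
JOIN web_profiles  w
  ON LOWER(TRIM(c.email)) = LOWER(TRIM(w.email))                     -- normalise before comparing
UNION ALL
SELECT c.customer_id, w.web_visitor_id, 'NAME_POSTCODE' AS match_rule
FROM crm_customers c
JOIN web_profiles  w
  ON UPPER(REPLACE(c.postcode, ' ', '')) = UPPER(REPLACE(w.postcode, ' ', ''))
 AND SOUNDEX(c.surname) = SOUNDEX(w.surname);                         -- tolerate minor misspellings
```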

 

Personalisation data layer

With a single customer view, you can drive great value through both digital channels and inbound calls. Snowflake supports Hybrid Tables, enabling high-speed row-level data look-ups as well as storage optimised for analytical queries. This means you don’t need multiple data technologies: you can drive on-site personalisation, customer identification and next best action (NBA) within Snowflake, without the overhead of synching multiple data marts and all the associated latency.
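A minimal sketch of the idea, with illustrative names: a Hybrid Table keyed on customer ID serves the row-level look-up, while the same table remains queryable for analytics.

```sql
-- Illustrative Hybrid Table keyed for fast point look-ups.
CREATE HYBRID TABLE IF NOT EXISTS customer_profile (
    customer_id      NUMBER PRIMARY KEY,     -- primary keys are required and enforced
    segment          VARCHAR,
    next_best_action VARCHAR,
    updated_at       TIMESTAMP_NTZ
);

-- Row-store point read used to personalise an on-site visit or inbound call in real time.
SELECT segment, next_best_action
FROM customer_profile
WHERE customer_id = 1234567;
```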

 

Machine learning and analytics

Snowflake’s support for machine learning and data science workflows further amplifies the value of your customer data, enabling predictive modelling, customer segmentation and propensity scoring. The ability to leverage advanced analytics within Snowflake’s environment eliminates the need for data movement, streamlining your analytics workflows and accelerating time-to-insight.

The game changer is to work across large volumes of data with rich intent signals and refresh models at great velocity to drive critical time sensitive customer campaigns and personalisation.

We have used these capabilities with telecommunications clients to build customer churn radars. Our models were able to detect customer disengagement from hundreds of millions of rows of PAYG transaction usage data. We were able to trigger push-messaging campaigns offering data vouchers to those at risk of lapsing. These models, driven by continuously refreshed big data feeds, can be game-changing in churn prevention and cross-sell.
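A hedged sketch of the kind of disengagement signal involved, using hypothetical table names and thresholds: compare each customer’s latest week of PAYG usage with their own recent average.

```sql
-- Hypothetical PAYG_USAGE_EVENTS table; thresholds are illustrative.
WITH weekly_usage AS (
    SELECT msisdn,
           DATE_TRUNC('week', event_ts) AS usage_week,
           SUM(data_mb)                 AS data_mb
    FROM payg_usage_events
    WHERE event_ts >= DATEADD(week, -8, CURRENT_DATE())
    GROUP BY msisdn, usage_week
),
scored AS (
    SELECT msisdn,
           AVG(data_mb) AS avg_weekly_mb,
           MAX(CASE WHEN usage_week = DATE_TRUNC('week', CURRENT_DATE()) THEN data_mb END) AS latest_week_mb
    FROM weekly_usage
    GROUP BY msisdn
)
-- Sharp drop against the customer's own baseline: candidate for a data-voucher push message.
SELECT msisdn, avg_weekly_mb, latest_week_mb
FROM scored
WHERE COALESCE(latest_week_mb, 0) / NULLIF(avg_weekly_mb, 0) < 0.25;
```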

 

Gen AI

A key new feature of Snowflake is its role as a vector store with Gen AI capabilities.

We classify and label inbound customer voice and chat transcripts using Cortex functionality. This allows us to detect multiple signals within conversations: call drivers, dissatisfaction, the underlying cause of a complaint, vulnerability and cross-sell opportunities. Our models use this data alongside additional customer variables for highly effective triggered churn and cross-sell campaigns (a hedged classification sketch follows the list below).

  • Speed – using Snowflake and the Arctic LLM we can achieve faster throughput at far lower cost than many other foundation models.
  • Accuracy – classifier models trained on industry-specific data.
  • Data security – no 3rd-party data transfer; data stays in the Snowflake environment, with no need for additional tech clutter or the complication of data movement.
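A hedged sketch of transcript labelling using the general-purpose SNOWFLAKE.CORTEX.COMPLETE function with the Arctic model; the transcript table, columns and label set are illustrative, and a production build would use purpose-trained classifiers as described above.

```sql
-- Illustrative transcript table; the label set and prompt are simplified stand-ins.
SELECT
    interaction_id,
    SNOWFLAKE.CORTEX.COMPLETE(
        'snowflake-arctic',
        'Classify this customer transcript into exactly one of: billing_query, complaint, '
        || 'vulnerability, cancellation_risk, cross_sell_opportunity. Answer with the label only. '
        || 'Transcript: ' || transcript_text
    ) AS call_driver
FROM contact_transcripts
WHERE contact_date >= DATEADD(day, -1, CURRENT_DATE());
```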

 

Streamlined Data Management and Collaboration

The migration to Snowflake redefines the landscape of collaboration within your marketing organisation. Snowflake’s cloud-based data platform offers a unified data spine and environment for storing, processing and sharing all customer data. Secure data sharing facilitates seamless collaboration and knowledge sharing across teams such as advertising, operations and data science. We have helped clients securely share selected datasets with external affinity marketing partners, agencies and media vendors, fostering collaborative marketing initiatives and improved media ROI. We have helped companies run some great affinity partnerships with directly measurable sales via match-back through secure data sharing. This frictionless data sharing empowers your business to leverage external expertise and enrich your customer insights, driving innovation and differentiation in your marketing strategies.
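A minimal sketch of what a secure share looks like in practice; the database, schema, table and partner account identifier are all illustrative.

```sql
-- Illustrative share exposing one governed object to a partner account; nothing is copied or moved.
CREATE SHARE IF NOT EXISTS affinity_partner_share;

GRANT USAGE  ON DATABASE marketing_db                              TO SHARE affinity_partner_share;
GRANT USAGE  ON SCHEMA   marketing_db.partner_views                TO SHARE affinity_partner_share;
GRANT SELECT ON TABLE    marketing_db.partner_views.matched_sales  TO SHARE affinity_partner_share;

-- Make the share visible to the partner's Snowflake account.
ALTER SHARE affinity_partner_share ADD ACCOUNTS = partner_org.partner_account;
```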

 

MI and Campaign analysis

Cloud analytics has emerged as a revolutionary approach to self-serve MI through tools like Tableau and Power BI. We are commonly asked to pair Snowflake’s cloud data warehouse with BI tools. Customer operations dashboards and campaign analysis are among the most challenging areas for BI tools; these projects have traditionally been plagued by slow speeds, incomplete data and results that conflict with source-system reports. Snowflake provides users with speed over large volumes of data through elastic compute and support for both structured and unstructured data. The net result is more timely and consistent metrics across a wider set of data. One feature we love is Snowflake’s Time Travel, which allows you to analyse data as it stood at different points in time – invaluable for historical trend analysis. This can provide insight into how your data has evolved and help you make better data-driven decisions.
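A small sketch of Time Travel in use, assuming an illustrative campaign_responses table and a data retention period that covers the window queried.

```sql
-- Illustrative table; Time Travel can only reach back within the configured retention period.
SELECT campaign_id, COUNT(*) AS responses_now
FROM campaign_responses
GROUP BY campaign_id;

-- The same table exactly as it stood seven days ago, with no restore or snapshot needed.
SELECT campaign_id, COUNT(*) AS responses_last_week
FROM campaign_responses AT (TIMESTAMP => DATEADD(day, -7, CURRENT_TIMESTAMP()))
GROUP BY campaign_id;
```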

 

Conclusion

By investing in Snowflake, you are planting your flag in technology that the whole organisation can invest in for the future, while minimising the number of disparate data technology repositories. The enhanced performance and speed offered by Snowflake not only streamlines your marketing operations but also enhances the overall customer experience. By leveraging real-time insights, you can personalise marketing communications, deliver targeted promotions, and respond promptly to customer interactions, fostering stronger relationships and driving higher engagement with your brand.

Snowflake Unistore and Hybrid Tables – What Are They and How Can They Benefit Your Organisation? https://www.ccentric.co.uk/snowflake-unistore-and-hybrid-tables-what-are-they-and-how-can-they-benefit-your-organisation/ Thu, 11 Sep 2025 12:56:54 +0000 https://staging.ccentric.co.uk/?p=3604

In this article I want to explain how Snowflake’s new Hybrid Table support and Unistore architecture open up exciting new options for both Martech and Adtech use cases.

 

Hybrid tables

Since its release in 2015, many Snowflake-based data warehousing solutions have involved all OLTP being performed on external transactional databases such as Postgres, MySQL, or SQL Server, with periodic ETLs into (or out of) a centralised Snowflake data warehouse. Only once the data is in Snowflake (or a similar DWH platform) can OLAP be performed on the transactional data. Similarly, performing OLTP on Snowflake OLAP data products involved ETLs from Snowflake into a transactional database. While this solution works well for the most part, it is not without its caveats. Namely:

  • Data latency between the two systems.
  • Maintenance of ETL pipelines.
  • Cost of executing ETL pipelines, including request charges on the source database or Snowflake, data processing costs of a compute instance such as an EC2 instance, and data egress charges.

To address this, Snowflake has been rapidly developing a range of new features to enable workloads that go beyond the OLAP that Snowflake was initially intended for in 2015, bypassing the need for disparate data sources and complex ETLs. Amongst these new features are Hybrid Tables. While not a novel concept in the world of research (first described by Dr. Hasso Plattner in 2009), in industry Hybrid Tables are a recent development. On Snowflake, they were first made available in private preview in 2022 and later released into GA in Q4 last year – making now a great time to dive into the implications of this technology for the Snowflake ecosystem and the wider community.

Hybrid Tables enable OLTP to be performed within Snowflake and integrated seamlessly with OLAP in a “Unistore” workload, facilitating a range of OLTP use cases, from serving transactional data at high concurrency to a web application through to business-critical financial transaction systems. The architecture satisfies all requirements of such systems, including entity/referential integrity and high-concurrency point read/write throughput. At the same time, analytic workloads can be carried out just as on any standard Snowflake table, completely asynchronously from (and without interruption to) the ongoing high-concurrency transactional processes that keep the application running at low latency.

 

Columnar vs. Row Storage Architectures (OLAP vs. OLTP)

In standard Snowflake tables, data is organised into compressed immutable columnar files in object storage (S3, Azure Blob, GCS), with one file per column per micro-partition (a logical and physical clustering of records) and each micro-partition containing tens of thousands to millions of records. Snowflake maintains micro-partition metadata including the range of values stored in each of the columnar files. When a query or DML operation with a filter predicate (e.g. ‘where’ clause or ‘join’ condition) is executed against the table, Snowflake does a lookup against the micro-partition metadata such that only files whose value ranges overlap with the filter predicate are downloaded to the warehouse to be scanned. This type of ‘pruning’ works exceptionally well for OLAP workloads which typically involve operations over large spans of a column – such as joins or aggregation. This is for a few reasons:

  • The process of clustering and pruning described above greatly reduces the amount of data that needs to be scanned, particularly when using predicates over natural dimensions of the data such as dates.
  • Columnar compression greatly reduces the volume of data being transferred over the network when table data is being copied over from object storage to the compute warehouse.
  • Columnar storage means only the columns specified in the select clause are transferred.
  • Warehouses cache table data and interim results such as outputs from joins to expedite sequences of similar queries.

Since the columnar files are immutable, when DML operations are executed against the table, all micro-partitions containing affected records are locked for the duration of the operation – even if only a single record is being updated. This means that if process A is updating a single record when process B executes a DML operation on a single record out of the thousands or millions of records that happen to reside in the same micro-partition, process B cannot proceed until process A has:

  • Copied the files containing possibly millions of values from object storage to the warehouse
  • Decompressed the files
  • Scanned the files
  • Processed the files to update the record
  • Compressed the newly created files containing the updated field (remember the columnar files are immutable).
  • Copied the new file over to object storage (S3)

While this process is efficient on OLAP ‘bulk’-type workloads, as they do not tend to have many concurrent low-volume writes, this architecture would result in poor performance for OLTP-type workloads which typically have high volumes of concurrent random point (single/few records) writes. Similarly for point-read operations, working with columnar files containing thousands to millions of values is also extremely inefficient.

On the other hand, traditional OLTP-optimised databases such as MySQL and Postgres use a row store in which records are stored completely independently of one another. As a result, they have the following properties:

  • Row-level locking, as opposed to micro-partition level locking, gives this architecture far greater efficiency when dealing with many concurrent random point read/write operations – especially on smaller tables.
  • Only the relevant records are retrieved during read/write operations, as opposed to the whole micro-partition. When a single operation only deals with a small number of records (as is typical in OLTP), the result is that the volume of data transfer is far smaller due to the reduced redundancy. It should be noted that when dealing with many records per operation (as is typical in OLAP), the columnar compression and column separation outweigh the redundancy.

 

Enforcement of entity and referential integrity is another essential requirement for guaranteeing data correctness in OLTP. While Snowflake allows users to ‘define’ primary keys on standard tables, this is only descriptive, as there is no built-in functionality to enforce primary key uniqueness or referential integrity on standard Snowflake tables. On the other hand, traditional row-store-based databases such as MySQL or Postgres have entity and referential constraints built in. This means that any operation that tries to break these constraints, such as deleting a record whose primary key is a foreign key in another table (resulting in an ‘orphaned’ record), would be blocked and result in an error. I spent some time working in a data acceptance testing team for a migration over to Snowflake and found this to be a recurring issue in the presentation layer for an Adobe Campaign Monitor backend.
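For illustration, this is roughly what enforced constraints look like on Hybrid Tables (names are illustrative); the same declarations on standard tables would be informational only.

```sql
-- Illustrative schema: on Hybrid Tables these constraints are enforced, not merely descriptive.
CREATE HYBRID TABLE customers (
    customer_id NUMBER  PRIMARY KEY,
    email       VARCHAR UNIQUE
);

CREATE HYBRID TABLE orders (
    order_id    NUMBER PRIMARY KEY,
    customer_id NUMBER NOT NULL,
    order_total NUMBER(10,2),
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
);

-- Deleting a customer that still has orders now fails with an error
-- instead of silently leaving orphaned records behind.
DELETE FROM customers WHERE customer_id = 42;
```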

Snowflake Unistore

Hybrid Tables combine both columnar and row storage architectures into a single logical database object. With the row store as the primary storage, Hybrid Tables satisfy the referential and entity integrity requirements of OLTP and are well optimised for typical OLTP high-concurrency point read/write workloads. Asynchronously, a secondary columnar object store is maintained. This is identical to that of Snowflake’s standard tables, meaning more Snowflake-typical OLAP workloads involving large scans can be run directly on the table without interruption or performance impact on the ongoing OLTP.

While this may sound complex, Snowflake users only get a single view of the logical Hybrid Table, even though it comprises two underlying data structures. When a query is executed against the Hybrid Table, the query optimiser automatically chooses on which data structure the operation will take place.

Optimised Bulk Loading

Snowflake users have the option of using optimised bulk loading to load data into Hybrid Tables. This method is significantly faster and more cost-effective than loading data into Hybrid Tables incrementally (depending on your solution) when loading large volumes of data into the table. At the time of writing, optimised bulk loading only kicks in on the initial load into the table – i.e. if the table is currently empty but records have previously been deleted from it, optimised bulk loading will not be used.

Until recently, a limitation of Snowflake Hybrid Tables was that bulk loading could not be used in conjunction with foreign keys, as it was only supported by CTAS statements. In January 2025, Snowflake announced support for optimised bulk loading with INSERT INTO and COPY INTO statements (provided the table has always been empty), whereby the user can define the foreign keys in the CREATE TABLE statement and then bulk load into it. Further, Snowflake have announced that they intend to add optimised bulk loading for incremental batch loads in the future.
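A sketch of the two loading paths described above, with illustrative table and stage names.

```sql
-- Path 1: CTAS – keys declared inline, the initial load is optimised.
CREATE OR REPLACE HYBRID TABLE order_lookup (
    order_id NUMBER PRIMARY KEY
) AS
SELECT order_id FROM staging.orders_history;

-- Path 2 (since January 2025): create the table first, foreign keys included,
-- then bulk load with INSERT INTO or COPY INTO while the table has never held rows.
CREATE HYBRID TABLE order_lines (
    order_line_id NUMBER PRIMARY KEY,
    order_id      NUMBER NOT NULL,
    FOREIGN KEY (order_id) REFERENCES order_lookup (order_id)
);

INSERT INTO order_lines
SELECT order_line_id, order_id FROM staging.order_lines_history;
```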

Consistency and Latency Between Row and Columnar Stores

Users can choose between a session-based or global consistency model via the “READ_LATEST_WRITES = true/false” option. If set to false (the default), users can expect data staleness of up to 100ms between sessions (from row store to columnar store), and zero staleness within the session. If set to true, there is no data staleness; however, the latency of operations on the row store may increase by a few milliseconds (according to the Snowflake docs). Ultimately this depends on the use case, but in most OLAP scenarios 100ms is negligible.

Cost

Snowflake uses the same compute pricing model regardless of table type. Accounts are charged based on the same per-second billing of the compute warehouse used for processing on the Hybrid Table. Generally, OLTP should be done on a Hybrid Table using a multi-cluster XS warehouse and scaled ‘out’ rather than ‘up’ with workload – meaning increasing the MAX_CLUSTER_COUNT parameter rather than the warehouse size when defining or altering the warehouse.
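A sketch of that scale-out configuration, with an illustrative warehouse name.

```sql
-- Illustrative multi-cluster XS warehouse: concurrency adds clusters, not a bigger size.
CREATE WAREHOUSE IF NOT EXISTS oltp_wh
  WITH WAREHOUSE_SIZE    = 'XSMALL'
       MIN_CLUSTER_COUNT = 1
       MAX_CLUSTER_COUNT = 4        -- extra clusters spin up only under concurrent load
       SCALING_POLICY    = 'STANDARD'
       AUTO_SUSPEND      = 60
       AUTO_RESUME       = TRUE;
```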

Snowflake charges users $40 per compressed TB (at the time this was written, depending on the Snowflake Edition). Users should expect to incur an additional storage cost due to the dual data structure architecture. The secondary columnar storage cost is the same as it would be if it were a standard Snowflake table. In addition to this, users must pay the cost of the row storage – which tends to be higher due to the lack of columnar compression. Therefore, users can expect to pay more than double the storage cost of a standard Snowflake table of equal size, given that they are paying for the combined storage cost of the two architectures.

The solutions that a hybrid-table-based methodology would be replacing would typically involve:

  • An RDS (or Azure/GCP equivalent) instance hosting the application database – these can be quite expensive.
  • ETL costs for:
    • Execution of extraction queries on the source database.
    • Costs of running a compute instance (or lambda function) to execute extraction queries and possibly perform some data transformation.
    • Data egress costs of moving data from the RDS instance to the compute instance and onto the Snowflake stage.
    • Snowflake compute costs for ingesting the data.

Use Case

With recent improvements in Snowflake’s Hybrid Table technology and release into general availability, the number of organisations incorporating Hybrid Tables into their solutions has grown rapidly.

We have implemented Hybrid Tables in a real-time personalisation system for a ticketing platform. We needed to provide recommendations and promotions within the user visit, both pre and post ticket purchase. These promotions were part of a Retail Media implementation, managed by a graph that allowed 3rd-party organisers/promoters to select audiences and define on-site promotions for their events. By serving promotion treatments from Hybrid Tables we reduced lookup latency. It also allowed us to maintain unified governance of data: all sensitive data was kept within Snowflake, and we avoided data wrangling and synchronisation with 3rd-party data stores. Event transactions from these promotions (impressions/clicks), captured in the hybrid data, were available for instant analysis via the Unistore OLAP tables. The performance of Snowflake was key, as we had hundreds of concurrent reporting users across the 3rd-party advertiser base.

Snowflake is an excellent environment for generating recommendations for customers – especially with its latest efforts with Snowpark ML and Snowflake Cortex. Precomputed treatments can be bulk generated from inbound data feeds as well as customer browsing behaviour. A Task (essentially Snowflake’s cron job service) can schedule ‘optimised bulk loads’ into the Hybrid Table using a CTAS statement. Single-customer recommendations, which are point-read operations, can then be queried by the web application’s business logic, serving fresh OLAP-generated recommendation data at the “double-digit millisecond” latency required by such applications. What really makes this a great use case is that previously you would have had to load precomputed treatments back into some relational database system on a much less frequent basis. With this solution, promotional treatments can be generated and refreshed far more frequently and with much lower latency.
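A hedged sketch of that pattern, following the CTAS-inside-a-Task approach described above; the table, task, warehouse name and schedule are illustrative.

```sql
-- Illustrative task that rebuilds the treatments Hybrid Table on a schedule via CTAS.
CREATE OR REPLACE TASK refresh_promo_treatments
  WAREHOUSE = oltp_wh
  SCHEDULE  = 'USING CRON */15 * * * * UTC'          -- every 15 minutes
AS
  CREATE OR REPLACE HYBRID TABLE promo_treatments (
      visitor_id   VARCHAR PRIMARY KEY,
      treatment    VARCHAR,
      generated_at TIMESTAMP_NTZ
  ) AS
  SELECT visitor_id, treatment, CURRENT_TIMESTAMP()
  FROM analytics.latest_recommendations;              -- OLAP / ML output generated elsewhere

ALTER TASK refresh_promo_treatments RESUME;           -- tasks are created suspended

-- Point read issued by the web application's business logic at request time.
SELECT treatment
FROM promo_treatments
WHERE visitor_id = 'abc-123';
```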

 

 

Key Limitations

In their documentation, Snowflake go into detail about the current limitations of Hybrid Tables. Here, I’m just going to outline the three most important limitations that you should consider if you are thinking about implementing Snowflake Hybrid Tables as part of your solution.

AWS Only

At the rate new features are being added to Hybrid Tables, it is likely these will come to Azure and GCP at some point; however, I could not find any mention of plans to bring this technology outside of AWS.

Data Quality & Constraints

This should go without saying, but the level of data quality required in a Snowflake Hybrid Table is higher than what is required of a standard Snowflake table. Referential integrity, primary keys, uniqueness constraints, and stricter constraints on COPY INTO statements must all be adhered to (as in any traditional RDBMS such as Postgres), so you may need to do significantly more data processing and cleaning to get the data into a format acceptable for OLTP.

Quotas and Throttling

While Hybrid Tables have seen significant performance improvements over the last year or so, the number of read/write operations per second is still capped at a quota of only 8,000 operations per second in a balanced 80/20 read/write workload. By contrast, under optimal conditions, typical OLTP systems such as Postgres can handle millions of requests per second. If your sole requirement is to maximise performance under an OLTP workload, then Snowflake Hybrid Tables are a long way off in this regard. That said, it is important to note that transactional databases have seen decades of incremental optimisation, whereas Hybrid Tables are a relatively new development and are likely to continue to improve rapidly over the next couple of years.

Conclusion

At present, Snowflake Unistore is not a one-for-one replacement for transactional databases in all OLTP workloads, and deciding whether this technology is a good fit for your organisation’s solution will require careful consideration of its strengths and limitations – especially regarding throughput. That being said, even in their infancy, Hybrid Tables have already seen adoption by a range of organisations with a variety of use cases – speaking to the potential unlocked by the ability to seamlessly integrate OLAP with OLTP in a single environment, with low or zero staleness between the two. While the concept of Unistore has existed for some 15 years, only recently have we seen platforms such as Snowflake, BigQuery, and Power BI include the technology as part of their offerings. Although there are still some limitations, rapid advancements in the technology, combined with growing adoption by organisations, indicate that Unistore and Hybrid Tables may see much more widespread use in the future.

CCentric retains ISO 27001:2022 Certification https://www.ccentric.co.uk/ccentric-retains-iso-270012022-certification/ Thu, 11 Sep 2025 09:33:18 +0000 https://staging.ccentric.co.uk/?p=3587

C-Centric has passed its annual recertification of the ISO 27001:2022 certification – the world’s best-known standard for information security management systems (ISMS).

Achieving ISO 27001 certification involves undergoing a thorough assessment by an accredited certification body to ensure compliance with the standard’s requirements. It provides a formal recognition that a business has effectively implemented information security controls and practices.

“ISO 27001 is a key element of our technological roadmap and represents our dedication to safeguarding our customers’ sensitive data and ensuring the highest standards of security and compliance,” says David McKee, Technology Director.

“Our commitment to robust information security and operational excellence drives us to continuously enhance our processes, invest in cutting-edge technologies, and foster a culture of vigilance throughout our company. With ISO 27001, we have fortified our position as a trusted partner, providing peace of mind to our customers and reaffirming our relentless pursuit of maintaining the highest levels of security and trust in everything we do.”

ISO 27001 covers various aspects of information security, including risk assessment and management, asset management, access control, cryptography, physical security, business continuity, and incident management. By implementing ISO 27001, C-Centric has demonstrated its commitment to protecting the confidentiality, integrity, and availability of our business and customers’ information assets.

ISO 27001 certification is currently the most widely adopted international information security standard used by companies worldwide. By following ISO 27001, businesses can be confident that their Information Security Management Systems (ISMS) are up to date and comply with current best practices.

Webinar: Case study – building a Gen AI Digital assistant for telco switching https://www.ccentric.co.uk/webinar-case-study-building-a-gen-ai-digital-assistant-for-telco-switching/ Tue, 05 Aug 2025 09:41:11 +0000 https://staging.ccentric.co.uk/?p=3595