Huge April Birthday Celebrations at the Pune Branch!

At our Pune, India branch, we celebrated April birthdays on the 25th! A total of 9 birthdays were celebrated, with a turnout so large we had to use the panoramic setting on the camera!

Also, it just so happens that Shyam, Vice President of our Middleware Practice, was in town for business. He was able to witness and join in on the festivities!



Team Serene-AST at CX World!

It’s day 2 of CX World and, as always, Serene-AST is having a blast meeting and reconnecting with people from the industry!

The days here are flying by–CX World is almost over! Since time is running out, come and stop by our kiosk! We are at MSL-17 in the Sales section of the Exhibit Hall. We will be here today from 7:30 AM – 7:30 PM and tomorrow, from 7:45 AM – 5:00 PM.



Kiosk Opening Tonight at CX World!

Welcome, welcome to CX World 2017! We’re so excited to be here to connect with and meet new and old members of the community!

We will be here during opening night, so stop by any time from 5:00 – 6:30 PM during the Welcome Reception! In addition to tonight, we’ll be here Wednesday, from 7:30 AM – 7:30 PM and Thursday, from 7:45 AM – 5 PM.

Throughout the week, we’ll be discussing the most relevant topics in CX! See you soon, and have fun!


We’re Presenting at CX World!

Our team members are currently gearing up to travel to Vegas! And you should gear up too–for the presentation we’ll be giving at the conference!

One of our clients will be discussing our CPQ solution in the presentation, CPQ: Heatmaps Its Way to Greater Sales Efficiency.

The presentation will be on Wednesday, April 26, 2017 at 2PM in Theater 1. Don’t miss it!


CX World is Just Around the Corner!

Tradeshow season is in full force as many Serene-AST team members gear up to head to Modern Customer Experience 2017, April 25-27, in Las Vegas, NV!

Modern Customer Experience is a conference where more than 3,000 attendees will meet to network with industry peers and thought leaders, receive hands-on training, and listen in on all the sessions.

Stay tuned for more information about the Serene-AST kiosk and presentations, as well as general updates while our team is there!


Understanding Enterprise Data Governance: Part 4

MDM and Big Data Make Each Other Better

This is the fourth blog post in a series exploring Enterprise Data Governance.  In the first one, we briefly defined transaction data, metadata, master data, reference data, and dimensional data. In the second part, we further explored reference data and its role in data governance solutions. In the third part, we discussed data governance needs within Financial Services, a highly-regulated industry, and how other industries can benefit from these capabilities.

In this installment, we bring Big Data into the discussion.

Big Data allows companies to process data sets that are too large to handle by traditional means.  These data sets can originate from within the company; for example, a large airline may produce massive volumes of diagnostic data every hour, which is far beyond what is cost effective to store long-term.   Many companies are focused on data originating from sources outside the enterprise, such as social media, financial instrument performance, or weather monitoring.  With so many varied sources of Big Data available, can Big Data be governed?  If so, is it worth the effort?

Before answering those questions, it’s important to point out that Big Data vendors may be pushing features of their software solutions instead of discussing Big Data governance.  Product vendors tend to discuss Big Data use cases from the factory perspective; in other words, in terms of the types of data sought or the processes being built.

Vendors will typically cover information such as:

  • Social Media Exploration
  • Internet of Things
  • Data Warehouse Modernization

However, it is essential for Big Data Strategy to include the ability to drive value from Big Data insights across relevant use cases, since use cases drive the investment. That’s where Master Data Management (MDM) comes into play.

The following should be considered:

  • Customer Analytics
  • Product Marketing Effectiveness
  • Operational Efficiencies
  • Merger and Acquisition Impacts
  • Market Opportunity Analysis

The key is understanding what value propositions are sought when investing in Big Data solutions; this will allow companies to gain a competitive advantage. Rather than attempting to govern what may be “ungovernable,” MDM seeks to bring clarity to the key aspects of the business that drive performance. This, in turn, lends clarity to key business drivers that can be improved through Big Data analysis. In other words, MDM facilitates an increase in ROI from Big Data investment by focusing on driving analysis from well-governed enterprise data.

One of the fundamental Big Data principles is that greater insights can be attained from aggregations and statistics than can be gleaned from any individual record.  For example, in order to analyze consumer sentiment regarding a product, a company may mine social media for data. However, this produces some challenges: brand sentiment is often easier to analyze than sentiment towards specific products. This is where MDM becomes fundamental: mining Big Data against a cleansed and consolidated master list of products makes product-level analysis possible.
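As a minimal sketch of this idea, consider matching social-media posts against a cleansed master product list so that sentiment can be attributed at the product level rather than only at the brand level. All product names, SKUs, and posts below are invented for illustration:

```python
# Hypothetical master product list from MDM: consolidated, deduplicated
# product names mapped to canonical SKUs (all values are invented).
MASTER_PRODUCTS = {
    "ultrabook x1": "SKU-1001",
    "tablet pro": "SKU-2002",
}

def attribute_posts(posts):
    """Attribute each post to the master SKU it mentions, if any."""
    hits = []
    for post in posts:
        text = post.lower()
        for name, sku in MASTER_PRODUCTS.items():
            if name in text:
                hits.append((sku, post))
    return hits

posts = [
    "Loving my new Ultrabook X1!",
    "Great service from the brand overall.",   # brand-level only; no product match
    "The Tablet Pro battery could be better.",
]
print(attribute_posts(posts))
```

Without the consolidated list, the middle post and the two product mentions would all blur together as generic brand sentiment.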

All companies need to address similar challenges just to obtain the right subset of Big Data to analyze.  Once companies have assembled the proper datasets, what separates their effectiveness in the analysis stage is the ability to leverage master data to create meaningful aggregations.  A company that can analyze customer sentiment across geographic, business region, and operational cost dimensions will be able to make more rapid and meaningful business process adjustments than a competitor that only considers geography.  Only enterprises with well-managed MDM programs can make adjustments to business practices based on this analysis with confidence.
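To make the aggregation point concrete, here is a small sketch in which customer master data supplies the dimensions (geography and line of business are invented examples) that let the same raw sentiment records be sliced multiple ways:

```python
from collections import defaultdict

# Hypothetical customer master data: MDM maintains the dimensional
# attributes for each customer (all IDs and values are invented).
CUSTOMER_MASTER = {
    "C1": {"geo": "US", "lob": "Retail"},
    "C2": {"geo": "US", "lob": "Commercial"},
    "C3": {"geo": "EU", "lob": "Retail"},
}

# Raw sentiment scores per customer, e.g. mined from social media.
sentiment = [("C1", 0.9), ("C2", -0.2), ("C3", 0.4), ("C1", 0.5)]

def aggregate(records, dim):
    """Average sentiment grouped by a master-data dimension."""
    totals, counts = defaultdict(float), defaultdict(int)
    for cust, score in records:
        key = CUSTOMER_MASTER[cust][dim]
        totals[key] += score
        counts[key] += 1
    return {k: totals[k] / counts[k] for k in totals}

print(aggregate(sentiment, "geo"))  # same records, sliced by geography
print(aggregate(sentiment, "lob"))  # same records, sliced by line of business
```

The competitor limited to geography can only produce the first view; well-managed master data makes every additional dimension a one-line change.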

After the initial implementation, an effective Big Data strategy will plan for growth along the capability-maturity learning curve.  A useful analogy is how master reference data is used to manage acquisitions in a phased approach.  When a business is acquired, its chart of accounts is mapped onto the parent company’s chart to produce consolidated financial results.  Sometimes the parent company’s chart of accounts must be extended to accommodate the new business.  These data sets and mappings then make their way into the data warehouse.  For conglomerates, that may be as far as it goes, but in many cases the acquired business ultimately moves to the parent company’s chart of accounts and systems, where MDM then supports a full-blown financial transformation process within the acquired business.
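The chart-of-accounts analogy can be sketched as a simple mapping exercise. The account codes and balances below are entirely invented; the point is only that every acquired account must map into the parent chart before consolidated results can be produced:

```python
# Hypothetical mapping from an acquired company's chart of accounts
# onto the parent company's chart (all codes are invented).
ACCOUNT_MAP = {
    "4000-SALES": "40100",
    "4100-SVC":   "40200",
    "5000-COGS":  "50100",
}

def consolidate(parent_balances, acquired_balances):
    """Fold acquired balances into the parent chart via the mapping."""
    out = dict(parent_balances)
    for code, amount in acquired_balances.items():
        mapped = ACCOUNT_MAP.get(code)
        if mapped is None:
            # The parent chart of accounts must be extended to
            # accommodate the new business before consolidation.
            raise KeyError(f"unmapped account: {code}")
        out[mapped] = out.get(mapped, 0) + amount
    return out

parent = {"40100": 1_000_000, "50100": 600_000}
acquired = {"4000-SALES": 250_000, "5000-COGS": 150_000}
print(consolidate(parent, acquired))
```

The unmapped-account error is exactly the signal that drives the "extend the parent chart" step described above, and the mapping itself is the artifact that later lands in the data warehouse.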

Big Data follows a similar progression, where master and reference data provide the mappings for external, unstructured data sources to align with internal data sources for analytics.  As the Big Data processes mature, they influence governance processes, extending the validated code sets and mappings to accommodate the high-value, unstructured data sources.  This establishes an ongoing feedback loop between MDM and Big Data that increases the effectiveness of both.

Process alignment between MDM and Big Data is critical to maximizing these synergies.  There are a multitude of valid technical options, but they are of secondary importance to the business and data governance use cases.  For example, many data architects have a preconceived notion that MDM should push master data into the data lake to better support the Big Data best practice of “Transform in Place.”  While this is certainly an option, solutions like Oracle’s Big Data Appliance include highly scalable technologies that allow Hadoop file storage to be accessed directly by SQL and integration technologies (bypassing batch MapReduce processing entirely), making it practical to map and transform unstructured data in middleware instead.

In summary, Big Data analytics resemble traditional Data Warehouse analytics in that the better the data is governed, the better the insights from analysis will be.  This will always be true, regardless of the technologies utilized.


Pune Branch Celebrates Women’s Day!

March 8th saw the international celebration of Women’s Day, a day that commemorates the movement for women’s rights and celebrates women’s achievements.

So, on March 8th, 2017, our office in Pune, India also held a celebration, with some tasty snacks, of course!


Understanding Enterprise Data Governance: Part 3

This is the third blog post in a series exploring Enterprise Data Governance.  In the first one, we briefly defined transaction data, metadata, master data, reference data, and dimensional data. In the second part, we further explored reference data and its role in data governance solutions. For this installment, we will discuss data governance needs within Financial Services, a highly-regulated industry, and how other industries can benefit from these capabilities.

Most consultants would guess that data privacy is the primary data governance concern for most Financial Services executives.  Data privacy is indeed a critical concern, and cannot be ignored in the normal course of running a stable and profitable Financial Services business.  Maintaining profitability, however, also requires complete, timely, and accurate data to support operational decisions that align with company strategy, as well as compliance with regulations that demand unprecedented levels of transparency and accountability.

The Sarbanes-Oxley Act was passed in 2002 to protect investors from companies’ potentially fraudulent accounting activities. It is well known that this act, which affected all US corporations, legislated individual responsibility, including some personal liability, for key executives in ensuring the accuracy and completeness of financial statements.

The Financial Transparency Act of 2015 further requires that US companies in the Financial sector make the data their financial statements are based upon open and searchable by regulatory bodies, such as the SEC.  There is also a lesser-known provision within the Financial Transparency Act requiring that the maintenance of reference data supporting financial reporting be made available for audit, including who made each change and when.  It explicitly states that if reporting hierarchies and selection criteria are maintained in spreadsheets, those spreadsheets need to include macros that accurately capture and retain the required fields to support regulatory audits.
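At its core, the audit provision boils down to recording who changed each reference value, what it was before, and when the change happened. The sketch below illustrates that idea with an invented code set and invented user names; it is not a representation of any particular product or of the Act's exact requirements:

```python
from datetime import datetime, timezone

class AuditedCodeSet:
    """Reference data store that records an audit entry for every change."""

    def __init__(self):
        self.values = {}      # code -> current description
        self.audit_log = []   # append-only change history

    def set_value(self, code, description, user):
        old = self.values.get(code)
        self.values[code] = description
        self.audit_log.append({
            "code": code,
            "old": old,
            "new": description,
            "user": user,
            "at": datetime.now(timezone.utc).isoformat(),
        })

regions = AuditedCodeSet()
regions.set_value("NE", "Northeast", user="jsmith")
regions.set_value("NE", "North East Region", user="adoe")

for entry in regions.audit_log:
    print(entry["code"], entry["old"], "->", entry["new"], "by", entry["user"])
```

A spreadsheet with the mandated macros is effectively a manual, fragile version of this append-only log, which is why purpose-built tools like DRM satisfy the requirement so readily.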

Many of the largest Financial Services providers in the US, including American Express, Bank of America, Chase Bank, Wells Fargo, and dozens of others were prepared to meet these regulations since they were using Oracle Data Relationship Management (DRM) to master critical financial reference data.  DRM also has a large global customer base, and it is no surprise that DRM not only meets and exceeds these stringent regulatory requirements, but also allows its customers to manage reference data across a broad array of enterprise systems, data warehouses, and reporting solutions from a single point of entry and validation.  This is key for an industry full of behemoths that have grown via mergers and acquisitions, often requiring them to manage extreme complexity in mapping their internal management processes to their externally reported line of business financial results with full confidence in both their accuracy and their audit transparency.

For the past decade, DRM has been the most powerful and complete reference data and dimension management solution commercially available, but its use has historically been mostly limited to the upper echelon of industry leaders due in part to both its cost and its marketing focus.  That will change when Oracle releases the next generation of DRM on the Cloud.  An early release was demonstrated at Oracle Open World 2016 as Dimension Management Cloud Services, and we hope to see its production release this year.  The Cloud promises to make this technology, which has the power to manage the most complex business models in existence, available to a broader customer base at an affordable price, with greatly simplified setup procedures.


Understanding Enterprise Data Governance: Part 2

This is the second blog post in a series exploring Enterprise Data Governance.  In the first post, we briefly defined transaction data, metadata, master data, reference data, and dimensional data.  That discussion primarily focused on transactional data and metadata, and can be found here. In this post, we will further explore reference data and its role in data governance solutions.

As we move beyond transaction data and metadata, and into the realms of master and reference data, most academics and analysts tend to focus on solutions and methodologies rather than attempting to clearly differentiate between the types of data that need to be governed.  Not only does this introduce a solution bias, but it also leads to a tendency to lump these data categories together in master/slave relationships and leave it at that.  For example, reference data is commonly classified as a subset of master data, and dimensional data as a subset of reference data.

Technically, there is nothing inaccurate about these assertions, but it would be a mistake to think that a single solution can fully address all of them without first gaining an understanding of the different challenges involved in governing these various types of data.  Only then can we accurately assess the solutions and technologies that are best suited to the task.  For this purpose, we will treat master data, reference data, and dimensional data as separate, distinct categories from a governance perspective.

Reference data is the easiest of the three types to understand.  It is made up of various lists and code sets that are used to classify and organize data.  Country codes, industry codes, status codes, account types, and employee types are among the many examples of reference data.  Reference data sets can vary wildly in size and complexity.  For example, there might only be a dozen or so valid account status codes, whereas there may be over a thousand valid industry codes.  Code sets related to product SKUs, financial instruments, and the like can be much larger, ranging into the hundreds of thousands or even millions of records in rare cases.
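As a minimal illustration, reference data often amounts to nothing more exotic than small lookup sets that turn opaque codes on transactional records into meaningful classifications. The code sets and transaction below are invented examples in the spirit of the lists described above:

```python
# Illustrative (invented) reference data: small code sets used to
# classify transactional records. Real sets range from a dozen
# entries to, in rare cases, millions.
ACCOUNT_STATUS = {
    "A": "Active",
    "S": "Suspended",
    "C": "Closed",
}

COUNTRY_CODES = {  # ISO 3166-1 alpha-2 style country codes
    "US": "United States",
    "IN": "India",
    "DE": "Germany",
}

# A transactional record carries only the codes...
transaction = {"account_status": "A", "country": "IN", "amount": 125.50}

# ...and reference data supplies their meaning.
print(ACCOUNT_STATUS[transaction["account_status"]],
      COUNTRY_CODES[transaction["country"]])
```

Governing reference data means governing these lists: who may add a code, what each code means, and how long retired codes must remain resolvable.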

While the concept of reference data is easy to grasp, there can also be significant complexities that need to be addressed.  Some reference data sets are standardized by regulatory or governing bodies, such as the International Organization for Standardization (ISO), which maintains standardized lists of country codes among other things.

Another example is the US Census Bureau, which maintains the North American Industry Classification System (NAICS).  It is common for companies to require internally managed alternate code sets as well.  For example, a COTS solution may include US territories in an internal State table, requiring this alternate list to be cross-referenced to standardized state and territory code sets for regulatory purposes.

Other reference data sets need to be controlled directly by the enterprise since they relate to how business is conducted.  Sales territories, lines of business, and departments are common examples.  As mentioned previously with the State table example, this can also include the configurations of code sets within applications, such as employee types and account status, when custom business processes need to be accommodated.

From a governance perspective, mastering reference data goes beyond maintaining traditional lookup tables.  The ability to maintain well-documented business and technical definitions of code set values, including data versioning and audit history, is essential.  Functionality for maintaining and validating mappings between related code sets is also of vital importance.
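A small sketch of mapping validation, using the State-table example from earlier: every code in the internal (COTS) list must map to a valid code in the standardized set, and any unmapped entries, such as a territory with no standardized state code, surface as governance exceptions. All codes and mappings are invented for illustration:

```python
# Hypothetical internal State table from a COTS application,
# including a territory (PR) that has no entry in the
# standardized state code set (all lists are invented examples).
INTERNAL_STATES = {"CA", "NY", "TX", "PR"}
STANDARD_CODES = {"CA", "NY", "TX"}
STATE_MAP = {"CA": "CA", "NY": "NY", "TX": "TX", "PR": None}

def validate_mapping(source, mapping, target):
    """Return source codes that are unmapped or map outside the target set."""
    problems = []
    for code in source:
        mapped = mapping.get(code)
        if mapped is None or mapped not in target:
            problems.append(code)
    return sorted(problems)

print(validate_mapping(INTERNAL_STATES, STATE_MAP, STANDARD_CODES))
```

In a governance tool, the exceptions this check surfaces would be routed to a data steward for resolution, with the decision and its rationale captured in the audit history.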

Keep watch for the third part of this blog series, Understanding Enterprise Data Governance!


Serene-AST Launches CPQ for Media Demo!

Serene-AST is pleased to introduce its CPQ for the Media industry video demonstration. We’ve developed a revolutionary solution, specific to the media industry, that drives revenue for companies with a digital presence by leveraging Oracle CX Cloud solutions.

In the video, three separate use cases are demonstrated.

To view the full-length video, please click here. Separate videos for each use case can also be found under the same account.

Look forward to more videos in the future!
