Analytics, Business Intelligence, Data Visualization

Data Virtualization: 6 Best Practices to Help the Business ‘get it’

Joe McKendrick
3 Min Read

Something that doesn’t get talked about enough in the service orientation world is data virtualization. That is, it’s handy to be able to pull data from various sources into an abstracted service layer, versus having services or applications tapping live production databases. This helps cut down the need for physical storage, and provides a common interface for all applications using the data, especially BI, analytics, and transaction systems.
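
To make that concrete, here is a minimal sketch of the idea in the paragraph above: consumers ask a virtual layer for a logical view, and the layer pulls from the underlying sources on demand rather than copying them into a new physical store. The source files, connection details, and the "customers" view are hypothetical, made up for illustration; they don't come from the article or any particular product.

```python
# Minimal sketch of a data virtualization layer: logical views are resolved
# against live sources on demand, with no physical copy of the data.
# The "crm.db" database, "billing_customers.csv" file, and the "customers"
# view are hypothetical examples only.

import sqlite3

import pandas as pd


class VirtualDataLayer:
    """Maps logical view names to on-demand fetch functions over live sources."""

    def __init__(self):
        self._views = {}

    def register_view(self, name, fetch_fn):
        # fetch_fn runs only when the view is queried, so nothing is materialized
        self._views[name] = fetch_fn

    def query(self, name):
        return self._views[name]()


def fetch_crm_customers():
    # Hypothetical operational database, read on demand
    conn = sqlite3.connect("crm.db")
    try:
        return pd.read_sql_query("SELECT id, name, region FROM customers", conn)
    finally:
        conn.close()


def fetch_billing_customers():
    # Hypothetical flat-file extract from a billing system
    return pd.read_csv("billing_customers.csv", usecols=["id", "name", "region"])


layer = VirtualDataLayer()
layer.register_view(
    "customers",
    lambda: pd.concat(
        [fetch_crm_customers(), fetch_billing_customers()], ignore_index=True
    ).drop_duplicates(subset="id"),
)

# BI, analytics, and transaction systems would all share the same interface:
# customers = layer.query("customers")
```

Commercial data virtualization platforms do this declaratively and layer on caching, security, and query optimization, but the contract is the same: one logical interface over many physical sources.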

The whys and hows of data virtualization are explored by Judith Davis and Robert Eve in a new book, Going Beyond Traditional Data Integration to Achieve Business Agility. As with any service technology engagement, data virtualization involves a lot of players across the enterprise, so challenges tend to be more organizational and cultural than technical.

Davis and Eve outline 6 key best practices that anyone undertaking a data virtualization effort needs to consider:

1) Centralize responsibility for data virtualization. “The key benefit here is the ability to advance the effort quickly and to take on bigger concepts, such as defining common canonicals and implementing an intelligent storage component,” the authors say.

2) Agree on and implement a common data model. “This will ensure consistent, high quality data, make business users more confident in the data and make IT staff more agile and productive.” (A sketch of what such a canonical mapping can look like follows this list.)

3) Establish a governance approach. “This needs to include how to manage the data virtualization environment. Key issues are who is responsible for the shared infrastructure and for shared services.”

4) Educate the business side on the benefits of data virtualization. “Allocate time to consult with business users and make sure they understand the data,” Davis and Eve advise. “Establish an ongoing effort to make data virtualization acceptable to other areas of the organization.”

5) Pay attention to performance tuning and scalability. “Tune performance and test solution scalability early in the development process. Consider bringing in massively parallel processing capability to handle query performance on high-volume data. Accommodate the fact that users are unpredictable on ad hoc analysis and reporting.”

6) Take a phased approach to implementing data virtualization. “First abstract the data sources, then layer the BI applications on top and gradually implement the more advanced federation capabilities of data virtualization.”
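
Practice 2, the common data model, is the easiest of the six to picture in code. Below is a minimal sketch of a canonical customer record with per-source mapping functions; the field names and the two source record layouts are assumptions invented for illustration, not taken from Davis and Eve.

```python
# Minimal sketch of a common (canonical) data model: each source keeps its own
# record shape, and every consumer sees one agreed CanonicalCustomer.
# All field names and source layouts here are hypothetical.

from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class CanonicalCustomer:
    customer_id: str       # agreed business key across all sources
    full_name: str
    country_code: str      # e.g. ISO 3166-1 alpha-2
    customer_since: date


def from_crm(record: dict) -> CanonicalCustomer:
    # Hypothetical CRM shape: split name fields, lower-case country code
    return CanonicalCustomer(
        customer_id=str(record["cust_no"]),
        full_name=f'{record["first_name"]} {record["last_name"]}',
        country_code=record["country"].upper(),
        customer_since=date.fromisoformat(record["created"]),
    )


def from_billing(record: dict) -> CanonicalCustomer:
    # Hypothetical billing shape: single name string, numeric account id
    return CanonicalCustomer(
        customer_id=str(record["account_id"]),
        full_name=record["account_name"],
        country_code=record["iso_country"],
        customer_since=date.fromisoformat(record["first_invoice_date"]),
    )
```

The payoff the authors describe follows from where the disagreements live: source quirks get settled once, inside the mapping functions, rather than separately in every report and dashboard built on top.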
