Infrastructure and Data Management

By Eric Burgener

Two recent acquisitions in the software-defined storage (SDS) market - Nutanix bought PernixData and Red Hat bought Permabit - highlight a cautionary adage I often heard when working with venture capitalists in the past. When evaluating the future prospects of a funding opportunity, VCs want to understand whether a new business idea is a standalone product or really just a feature that will quickly be integrated into a platform (presumably one owned and shipped by someone else).


By Ashish Nadkarni

New approaches to infrastructure design are required for businesses to keep up with the volume of data being generated, data whose timely analysis is of paramount importance for remaining competitive in the digital economy. Newer approaches to infrastructure must focus on efficiency, to minimize budgetary shocks to IT departments, and on agility, to respond to business needs on demand. Businesses are embracing new-generation applications to prepare for the future, while maintaining current-generation applications that support revenue-generating operations.

Composable infrastructure technologies from vendors like TidalScale are designed with these key objectives in mind. They support both current- and new-generation applications, enabling IT to better serve revenue-generating operations while also supporting the business's foray into the future. Crucially, composable solutions are software defined: they maximize return on investments in server hardware by pooling compute, memory, and disk resources for maximum efficiency, utilization, and visibility across the entire datacenter, not just a single cluster of servers.


By Ashish Nadkarni

IoT is rapidly bridging the IT–OT divide. Data is no longer just under the purview of IT. Smart, connected devices, which fall under the purview of OT, enable data collection, control, and actuation, and support additional IT-centric applications. The need to collect, store, and analyze data in a cost-efficient and timely manner means that IT and OT architecture and operations models need to converge and coexist. Software-defined OT (SD-OT) and IT–OT convergence are part of an "intelligent edge." Converged IT/OT systems minimize data transfer between the core and the edge, and carry out OT and IT functions seamlessly. SD-OT moves OT functions into software running on industry-standard hardware; OT control and data acquisition functions become network-based and can be performed from the core or anywhere at the edge. Converging IT and OT means running IT and OT software on the same core and edge infrastructure tier, and possibly on the same physical hardware.



Some Thoughts on the Demise of Violin Memory

By Eric Burgener

Violin Memory, one of the early high flyers in the All Flash Array (AFA) space, filed for Chapter 11 bankruptcy in December 2016. This blog discusses some of the issues behind its predicament, and looks at how the use of custom flash modules (CFMs), which Violin employed in its Flash Storage Platform, has changed in enterprise-class arrays over the last couple of years.


By Ashish Nadkarni

The upcoming OpenStack Summit in Barcelona builds on a familiar theme: the unprecedented momentum that OpenStack has gained, and continues to gain, among firms of all shapes and sizes - enterprises, cloud and telecom service providers, and hyperscalers. At the summit, the community will seek to showcase the fruits of streamlined product development and project coordination, demonstrate that it is keeping pace with market trends, and, more importantly, show that it is actively listening to its constituents: developers, end users, and vendors. IDC anticipates that the messaging for the "Newton" release - "One versatile platform," backed by key themes such as scalability, resiliency, and user experience - will resonate strongly with attendees, setting the stage for an even bigger footprint for OpenStack in 2017.


By Eric Burgener

As flash storage and network throughput evolve through the next several technology generations, a significant imbalance looms. As organizations decide which storage architecture to go with - networked storage or hyperconverged - it is important to understand how these two technologies are evolving and what that evolution means for their IT infrastructure.


By Ashish Nadkarni

At the recently held HPE Discover conference, HPE announced Synergy, its foray into composable infrastructure solutions. The announcement is timely, as IDC is in the process of formalizing its research on composable and disaggregated infrastructure. This blog post provides a quick take on how IDC views this technology and the impact it will have on the storage, server, and networking markets.


By Eric Burgener

Starting with the June 2016 Tracker release, we will be using an updated All Flash Array (AFA) taxonomy that is more inclusive. In a nutshell, any array that ships from the factory in an all-flash configuration and does NOT optionally support hard disk drives (HDDs) will be considered an AFA. There will be three classes (or types) of AFAs, defined based on pedigree, to help customers understand the key differences between them.



Performance Games?

By Eric Burgener

Performance numbers released by vendors about their storage arrays are often based on "hero tests" that do little to communicate how a system will perform on real-world workloads. Should the vendor community strive for more realistic tests, or should customers come to better understand the limitations of existing ones? This blog explores these questions based on an informal lunchtime roundtable discussion at IDC Directions West on March 2.


By Ashish Nadkarni

This blog post discusses flash-enabled storage architectures like the new EMC VMAX All Flash, which will continue to underpin the modern datacenter while also enabling new workloads and driving economic benefits. Flash has compelled storage suppliers like EMC to go back to the drawing board and re-engineer storage architectures to capitalize on the transformational value of flash. The result is systems like the VMAX All Flash, which delivers new levels of performance and scale while retaining the gold-standard VMAX services that customers have come to rely on.


About this channel

  • 564k views
  • 86 articles
  • 8 followers
     

IDC's Infrastructure and Data Management Blog is the home for IDC storage analysts to share their thoughts on technology, market and industry trends, announcements, movers and shakers, innovative ideas, and recent research.
