
Infrastructure and Data Management

Eric Burgener

I had a chance to spend a few days at the Flash Memory Summit in Santa Clara this year, and this blog highlights some of the recent announcements in the AFA space from the show. NVMe was a major theme, and we are seeing more enterprise storage vendors announce NVMe-based features, products, and roadmaps.


Ashish Nadkarni

Last week, I explored some of the key issues and core benefits that are prompting enterprises to move to more flexible and cost-effective composable infrastructures. As I pointed out in Part 1 of this blog, composable infrastructure technologies from vendors like TidalScale are designed to address many of the most pressing issues in today’s data centers: the rapid growth of data, the challenge of accommodating unpredictable workloads with traditional servers and rack systems, and the inherent inefficiency and outright waste of provisioning servers that cannot address the needs of new-generation applications, or that are dedicated to running just one application. In this part, I will review the role of software-defined resources in making composable data centers a realistic and cost-effective end goal for enterprise digital transformation.


Eric Burgener

Some recent acquisitions in the SDS market - Nutanix bought PernixData and Red Hat bought Permabit - highlight a cautionary adage I often heard when working with venture capitalists in the past. When evaluating the future prospects of a funding opportunity, VCs want to understand whether a new business idea is a standalone product or is really just a feature that will quickly be integrated into a platform (presumably owned and shipped by someone else).


Ashish Nadkarni

New approaches to infrastructure design are required for businesses to keep up with the amount of data being generated, the timely analysis of which is paramount to remaining competitive in the digital economy. Newer approaches to infrastructure must focus on efficiency, to minimize budgetary shocks to IT departments, and on agility, to respond to business needs on demand. Businesses are embracing new-generation applications to prepare for the future while maintaining current-generation applications that support revenue-generating operations.

Composable infrastructure technologies from vendors like TidalScale are designed with these key objectives in mind. They support both current- and new-generation applications, enabling IT to better serve revenue-generating operations while also supporting the business's foray into the future. Crucially, composable solutions are software defined: they maximize return on investment in server hardware by pooling compute, memory, and disk resources for maximum efficiency, utilization, and visibility across the entire datacenter, not just a cluster of servers.


Ashish Nadkarni

IoT is rapidly bridging the IT–OT divide. Data is no longer just under the purview of IT. Smart and connected devices, which are under the purview of OT, enable data collection, control, and actuation, and make additional IT-centric applications possible. The need to collect, store, and analyze data in a cost-efficient and timely manner means that IT and OT architecture and operations models need to converge and coexist. Software-defined OT (SD-OT) and IT–OT convergence are part of an "intelligent edge." Converged IT/OT systems minimize data transfer between the core and the edge and carry out OT and IT functions seamlessly. SD-OT moves OT functions into software running on industry-standard hardware. OT control and data acquisition functions are network based and can be performed from the core or anywhere at the edge. Converging IT and OT means running IT and OT software on the same core and edge infrastructure tier, and possibly on the same physical hardware.


Eric Burgener

Some Thoughts on the Demise of Violin Memory

By Eric Burgener

Violin Memory, one of the early high flyers in the All Flash Array (AFA) space, filed for Chapter 11 bankruptcy in December 2016. This blog discusses some of the issues around its predicament and takes a look at how the AFA market's use of custom flash modules (CFMs), which Violin used in its Flash Storage Platform, has changed in enterprise-class arrays over the last couple of years.


Ashish Nadkarni

The upcoming OpenStack Summit in Barcelona builds on a familiar theme: the unprecedented momentum that OpenStack has gained (and continues to gain) among firms of all shapes and sizes: enterprises, cloud and telecom service providers, and hyperscalers. At the summit, the community will seek to showcase the fruits of streamlined product development and project coordination, demonstrate that it is keeping pace with market trends, and, more importantly, show that it is actively listening to its constituents: developers, end users, and vendors. IDC anticipates that the "One versatile platform" message for the "Newton" release, backed by key themes such as scalability, resiliency, and user experience, will resonate more strongly with attendees, setting the stage for an even bigger footprint for OpenStack in 2017.


Eric Burgener

As flash storage and network throughput evolve through the next several technology generations, a significant imbalance looms. As organizations decide which storage architecture to go with - network storage or hyperconverged - it is important to understand how these two technologies are evolving within their own IT infrastructure.


Ashish Nadkarni

At the recently held HPE Discover conference, HPE announced Synergy, its foray into composable infrastructure solutions. The announcement is timely, as IDC is in the process of formalizing its research on composable and disaggregated infrastructure. This blog post provides a quick take on how IDC views this technology and the impact it will have on the storage, server, and networking markets.


Eric Burgener

Starting with the June 2016 Tracker release, we will be using an updated All Flash Array (AFA) taxonomy that is more inclusive. In a nutshell, any array that ships from the factory in an all-flash configuration and does NOT optionally support hard disk drives (HDDs) will be considered an AFA. There will be three classes (or types) of AFAs, defined based on pedigree, to help customers understand the key differences between them.



About this channel

  • 530k views
  • 80 articles
  • 7 followers

IDC's Infrastructure and Data Management Blog is the home for IDC storage analysts to share their thoughts on technology, market and industry trends, announcements, movers and shakers, innovative ideas, and recent research.
