Infrastructure and Data Management
Some recent acquisitions in the SDS market – Nutanix bought PernixData and Red Hat bought Permabit – highlight a cautionary adage I often heard when working with venture capitalists in the past. When evaluating the future prospects of a funding opportunity, VCs want to understand whether a new business idea is a standalone product or is really just a feature that will quickly be integrated into a platform (presumably owned and shipped by someone else).
Software-defined storage (SDS) is a high-growth area that is bringing some strong benefits – better agility, easier storage management, and reduced CAPEX – to those IT organizations with the requisite skill to deploy it effectively. Some recent acquisitions in the SDS market – Nutanix bought PernixData and Red Hat bought Permabit – highlight a cautionary adage I often heard when working with venture capitalists, one that is particularly relevant for software products. When evaluating…
New approaches to infrastructure design are required for businesses to keep up with the amount of data being generated, whose timely analysis is of paramount importance for remaining competitive in the digital economy. Newer approaches to infrastructure must focus on efficiency, to minimize budgetary shocks to IT departments, and on agility, to respond to business needs on demand. Businesses are embracing new-generation applications to prepare for the future while maintaining current-generation applications that support revenue-generating operations.
Composable infrastructure technologies from vendors like TidalScale are designed with these key objectives in mind. They support both current- and new-generation applications, enabling IT to better serve revenue-generating operations while also supporting the business's foray into the future. Crucially, composable solutions are software-defined, and they maximize return on server hardware investments by pooling compute, memory, and disk resources for maximum efficiency, utilization, and visibility across the entire datacenter, not just a cluster of servers.
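The pooling idea can be made concrete with a minimal sketch (all server sizes and job requirements below are invented for illustration): in a siloed model a job must fit on a single server, while a composable pool lets the same hardware satisfy a request larger than any one box.

```python
# Illustrative sketch with hypothetical numbers: siloed servers vs. a
# datacenter-wide composable pool. A job needing more memory than any
# single server has is unschedulable in the siloed model, even though
# the datacenter as a whole has plenty of free memory.

servers = [
    {"cores": 32, "mem_gb": 256},
    {"cores": 32, "mem_gb": 256},
    {"cores": 32, "mem_gb": 256},
]

job = {"cores": 16, "mem_gb": 600}  # a large in-memory workload

# Siloed model: the job must fit entirely on one server.
fits_siloed = any(
    s["cores"] >= job["cores"] and s["mem_gb"] >= job["mem_gb"]
    for s in servers
)

# Composable model: resources are pooled across all servers.
pool = {
    "cores": sum(s["cores"] for s in servers),
    "mem_gb": sum(s["mem_gb"] for s in servers),
}
fits_pooled = pool["cores"] >= job["cores"] and pool["mem_gb"] >= job["mem_gb"]

print(fits_siloed, fits_pooled)  # → False True
```

The sketch deliberately ignores interconnect latency and placement policy; it only illustrates why pooling raises utilization and the size of schedulable workloads.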
How does composable infrastructure add value in today’s modern IT environment?
Current-generation IT infrastructure can be rigid and siloed, making it difficult for IT to deliver quickly on the demands of new-generation applications going into production. As businesses embrace NGAs, they are adopting an application-centric approach to IT — building environments that require new levels of scale, automation, and flexibility. This model means a shift from a static and inflexible infrastructure to…
IoT is rapidly bridging the IT–OT divide. Data is no longer just under the purview of IT. Smart and connected devices, which fall under the purview of OT, enable data collection, control, and actuation, and enable additional IT-centric applications. The need to collect, store, and analyze data in a cost-efficient and timely manner means that IT and OT architecture and operations models need to converge and coexist. Software-defined OT (SD-OT) and IT–OT convergence are part of an “Intelligent Edge.” Converged IT/OT systems minimize data transfer between the core and the edge, and carry out OT and IT functions seamlessly. SD-OT moves OT functions into software running on industry-standard hardware. OT control and data acquisition functions are network-based and can be performed from the core or anywhere at the edge. Converging IT and OT means running IT and OT software on the same core and edge infrastructure tier, and possibly on the same physical hardware.
This is an excerpt from an IDC Perspective posted on idc.com on the topic of SD-OT and the Intelligent Edge.
Firms embark on digital transformation (DX) initiatives by embracing a data-driven, analytics-first approach to improve business processes, increase operational efficiencies, better understand customer behavior and preferences, build deeper customer, supplier, and partner relationships, and, more importantly, be prudent about how they gain insight from the data they…
Violin Memory, one of the early high flyers in the All Flash Array (AFA) space, filed for Chapter 11 bankruptcy in December 2016. This blog discusses some of the issues around their predicament, and takes a look at how the AFA market's use of custom flash modules (CFMs) (which Violin used in the Flash Storage Platform) has been impacted in enterprise-class arrays over the last couple of years.
The demise of Violin Memory, one of the early high flyers of the AFA market, has not been greatly exaggerated. The NYSE suspended trading in Violin Memory shares on October 28, 2016, and delisted the stock because the company had not maintained an average global market capitalization of $15M over 30 consecutive trading days. From a revenue high of around $108M in its fiscal 2014, the company's revenues steadily shrank in the wake of its September 2013 IPO. The company officially filed for Chapter 11…
The upcoming OpenStack Summit in Barcelona builds on a familiar theme: the unprecedented momentum that OpenStack has gained (and continues to gain) amongst firms of all shapes and sizes – enterprises, cloud and telecom service providers, and hyperscalers. At the summit, the community will seek to showcase the fruits of streamlining product development and project coordination, of maintaining currency with market trends, and, more importantly, to show that it is actively listening to its constituents of developers, end users, and vendors. IDC anticipates that the message of the “Newton” release – “One versatile platform,” backed by key themes such as scalability, resiliency, and user experience – will resonate strongly with attendees, setting the stage for an even bigger footprint for OpenStack in 2017.
The OpenStack Summit in Barcelona, to be held October 25–27, 2016, could not have come at a more opportune time for the community. As 2016 draws to a close, it will no doubt be labeled a year that saw noteworthy mergers, splits, spin-offs, and even “spin-mergers” – all of which will collectively leave an indelible mark on the industry landscape. Call it what you may, the fact is that much of the IT industry is being disrupted, and is responding accordingly. In such trying times, this…
As flash storage and network throughput evolve through the next several technology generations, a significant imbalance looms. As organizations decide which storage architecture to go with – networked storage or hyperconverged – it is important to understand how each of these technologies is evolving in its own right.
Over the last several decades, processor performance improvements significantly outpaced storage performance improvements. All-flash arrays (AFAs), which leverage flash media, have helped to close this gap – one of the main drivers of their rapid penetration into mainstream datacenter infrastructure. Network performance is another key determinant of the actual performance that applications see, and existing network latencies and bandwidths have made flash performance accessible to them. …
At the recently held HPE Discover conference, HPE announced Synergy – its foray into composable infrastructure solutions. The announcement is timely, as IDC is in the process of formalizing its research on composable and disaggregated infrastructure. This blog post provides a quick take on how IDC views this technology and the impact it will have on the storage, server, and networking markets.
HPE Synergy - HPE's foray into the world of Composable Infrastructure - was one of the key highlights of the 2016 HPE Discover event. While the product, portfolio and solutions were announced late last year, Discover set the stage for the strategic significance of the "Synergy" brand for HPE. Given that IDC is in the midst of its own research on the topic of Composable and Disaggregated Infrastructures, this blog post (a series of questions and answers) serves to provide a quick take on how we…
Starting with the June 2016 Tracker release, we will be using an updated All Flash Array (AFA) taxonomy that is more inclusive. In a nutshell, any arrays that ship from the factory in all-flash configurations and do NOT optionally support hard disk drives (HDDs) will be considered AFAs. There will be three classes (or types) of AFAs, defined based on pedigree, to help customers understand key differences between them.
Starting with the June 2016 Tracker release, we will be using an updated All Flash Array (AFA) taxonomy that is more inclusive. In a nutshell, any arrays that ship from the factory in all-flash configurations and do NOT optionally support hard disk drives (HDDs) will be considered AFAs. There will be three classes (or types) of AFAs, defined based on pedigree, to help customers understand key differences between them, but our revenue and capacity forecasts will not include that level of…
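The taxonomy rule reduces to a single check – a minimal sketch, assuming just the two attributes the rule names (the pedigree-based classes are omitted, since their definitions are not spelled out in this excerpt):

```python
# Illustrative encoding of the AFA taxonomy rule described above:
# an array counts as an AFA only if it ships from the factory in an
# all-flash configuration AND does not optionally support HDDs.

def is_afa(ships_all_flash: bool, supports_hdd_option: bool) -> bool:
    """Return True if the array qualifies as an AFA under the rule."""
    return ships_all_flash and not supports_hdd_option

print(is_afa(True, False))  # → True  (factory all-flash, no HDD option)
print(is_afa(True, True))   # → False (hybrid-capable, so not an AFA)
```

The second case is the crux of the change: an array sold fully populated with flash still does not qualify if HDDs remain a supported option.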
Performance numbers released by vendors about their storage arrays are often based on "hero tests" that do little to communicate how a system will perform on real-world workloads. Should the vendor community strive for more realistic tests, or should customers come to better understand the limitations of existing tests? This blog explores these topics based on an informal lunchtime roundtable discussion at IDC Directions West on March 2.
Enterprise storage marketers want to be able to apply some sort of performance characterization to their systems to help customers understand the capabilities of a storage array. At IDC Directions West in San Jose on March 2, I had an interesting lunchtime conversation with a group of vendors on this topic.
Storage vendors have always used "hero tests" to generate the numbers they use to market their systems. A "hero test" is a test designed to produce the best performance number – for…
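To see why a hero number can mislead, here is a hypothetical sketch with invented throughput figures for a fictional array: the hero test reports the single most favorable I/O pattern, while a blended estimate over a mixed read/write workload lands far lower.

```python
# Invented per-pattern throughput (IOPS) for a fictional array.
# These numbers are illustrative only, not measurements of any product.
iops_by_pattern = {
    ("read", "4k"): 1_000_000,   # the "hero" number a datasheet would quote
    ("write", "4k"): 250_000,
    ("read", "64k"): 120_000,
    ("write", "64k"): 60_000,
}

hero_iops = max(iops_by_pattern.values())

# A more realistic mixed workload. Since all patterns share the array,
# blend per-pattern service times (1/IOPS), weighted by the mix, and
# invert to get an estimated blended IOPS figure.
mix = {
    ("read", "4k"): 0.5,
    ("write", "4k"): 0.3,
    ("read", "64k"): 0.1,
    ("write", "64k"): 0.1,
}
blended_time = sum(weight / iops_by_pattern[p] for p, weight in mix.items())
blended_iops = 1.0 / blended_time

print(int(hero_iops), int(blended_iops))  # → 1000000 238095
```

With these made-up figures the blended estimate is roughly a quarter of the hero number – the gap customers experience when a datasheet IOPS figure meets a production workload.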
This blog post discusses flash-enabled storage architectures like the new EMC VMAX All-Flash that will continue to underpin the modern datacenter, but additionally enable new workloads and drive economic benefits. Flash has compelled storage suppliers like EMC to go back to the drawing board – to re-engineer storage architectures and capitalize on the transformational value of flash. The results are systems like the VMAX All Flash, which delivers unprecedented levels of performance and scale while bringing the gold standard of VMAX services that customers have come to rely on.
Flash-enabled storage architectures have been in existence for a few years now. They underpin the Modern Data Center, which comprises a shared infrastructure resource model governed by policy-based service quality and automated enforcement of service level objectives. Flash boosts the ability of a storage system to scale linearly in terms of capacity and performance. Flash also vastly improves the predictability of the storage system.
Some suppliers, such as EMC, are pioneering the task of…
About this channel
IDC's Infrastructure and Data Management Blog is the home for IDC storage analysts to share their thoughts on technology, market and industry trends, announcements, movers and shakers, innovative ideas, and recent research.