Infrastructure and Data Management
The IDC AFA MarketScape evaluated 10 vendors' enterprise storage platforms on their ability to meet requirements for dense mixed workload consolidation that includes at least some mission-critical applications. In this rapidly maturing market, there is still much to differentiate vendors. This document should provide food for thought for customers and vendors alike.
In mid-December, IDC released the IDC MarketScape: Worldwide All-Flash Array 2017 Vendor Assessment (IDC, December 2017). Given the state of market maturity in the AFA space, it was necessary to narrowly focus the assessment on arrays specifically sold for dense mixed workload consolidation that included at least some mission-critical applications. Many AFA vendors now have a broad portfolio of AFA platforms, targeting each at different types of workloads and customers. Other…
I had a chance to spend a few days at the Flash Memory Summit in Santa Clara this year, and this blog highlights some of the recent announcements in the AFA space from the show. NVMe was a major theme of the show, and we are seeing more enterprise storage vendors announce NVMe-based features, products and roadmaps.
At the Flash Memory Summit at the Santa Clara Convention Center this year, NVMe technology was a mainstream theme. IDC research indicates that 48% of enterprises already have NVMe deployed in some manner in their IT shops, but 99%+ of this is local storage that was purchased aftermarket and configured into PCIe slots on commodity x86 servers. While there were several rack scale flash vendors at the show (Apeiron Data Systems, E8 Storage, Excelero), the rack scale flash market is still an…
Last week, I explored some of the key issues and core benefits that are prompting enterprises to move to more flexible and cost-effective composable infrastructures. As I pointed out in Part 1 of this blog, composable infrastructure technologies from vendors like TidalScale are designed to address many of the most pressing issues in today’s data centers: the rapid growth of data, the challenge of accommodating unpredictable workloads on traditional servers and rack systems, and the inherent inefficiency and outright waste of provisioning servers that cannot address the needs of new-generation applications or that are dedicated to running just one application. In this part, I will review the role of software-defined resources in ensuring that composable data centers are a realistic and cost-effective end goal for enterprise digital transformation.
How essential are software-defined resources to the composable data center?
A self-imposed incentive across most industries to transform digitally is fueling demand for a new infrastructure architecture. Lines of business are mandating that IT organizations adopt a software-defined, service-centric approach to speed up IT provisioning, optimize application performance, and increase IT efficiency. The endgame will be to run the business with processes and operations that are…
Some recent acquisitions in the SDS market - Nutanix bought PernixData and Red Hat bought Permabit - highlight a cautionary adage I often heard when working with venture capitalists in the past. When evaluating the future prospects of a funding opportunity, VCs want to understand whether a new business idea is a standalone product or is really just a feature that will quickly be integrated into a platform (presumably owned and shipped by someone else).
Software-defined storage (SDS) is a high growth area that is bringing some strong benefits – better agility, easier storage management, and reduced CAPEX – to those IT organizations that have the requisite skill to deploy it effectively. Some recent acquisitions in the SDS market – Nutanix bought PernixData and Red Hat bought Permabit – highlight a cautionary adage I often heard when working with venture capitalists in my past that is particularly relevant for software products. When evaluating…
New approaches to infrastructure design are required for businesses to keep up with the amount of data that is generated, and whose timely analysis is of paramount importance for the business to remain competitive in the digital economy. Newer approaches to infrastructure must focus on efficiency to minimize budgetary shocks on IT departments, and agility to respond to business needs on-demand. Businesses are embracing new-generation applications to prepare themselves for the future, while maintaining current-gen applications that support revenue-generating operations.
Composable infrastructure technologies from vendors like TidalScale are designed with these key objectives in mind. They are designed to support both current and new generations of applications, thus enabling IT to better service revenue-generating operations while also supporting the business's foray into the future. Crucially, composable solutions are software defined and maximize return on investment in server hardware by pooling compute, memory, and disk resources for maximum efficiency, utilization, and visibility across the entire datacenter, not just a cluster of servers.
How does composable infrastructure add value in today’s modern IT environment?
Current-generation IT infrastructure can be rigid and siloed, making it difficult for IT to deliver quickly on the demands of new-generation applications going into production. As businesses embrace NGAs, they are adopting an application-centric approach to IT — building environments that require new levels of scale, automation, and flexibility. This model means a shift from a static and inflexible infrastructure to…
IoT is bridging the IT–OT divide rapidly. Data is no longer just under the purview of IT. Smart and connected devices, which are under the purview of OT, enable data collection, control, and actuation, and enable additional IT-centric applications. The need to collect, store, and analyze data in a cost-efficient and timely manner means that IT and OT architecture and operations models need to converge and coexist. Software-defined OT (SD-OT) and IT–OT convergence are part of an "Intelligent Edge." Converged IT/OT systems minimize data transfer between the core and the edge, and carry out OT and IT functions seamlessly. SD-OT moves OT functions into software running on industry-standard hardware. OT control and data acquisition functions are network-based and can be performed from the core or anywhere at the edge. Converging IT and OT means running IT and OT software on the same core and edge infrastructure tier and possibly on the same physical hardware.
This is an excerpt from an IDC Perspective posted on idc.com on the topic of SD-OT and the Intelligent Edge.
Firms embark on digital transformation (DX) initiatives by embracing a data-driven, analytics-first approach to improve business processes and increase operational efficiencies, better understand customer behavior and preferences, build deeper customer, supplier, and/or partner relationships, and, more importantly, be prudent about how they gain insight from the data they…
Violin Memory, one of the early high flyers in the All Flash Array (AFA) space, filed for Chapter 11 bankruptcy in December 2016. This blog discusses some of the issues around their predicament, and takes a look at how the AFA market's use of custom flash modules (CFMs) (which Violin used in the Flash Storage Platform) has been impacted in enterprise-class arrays over the last couple of years.
The demise of Violin Memory, one of the early high flyers of the AFA market, has not been greatly exaggerated. The NYSE suspended trading in Violin Memory shares on October 28, 2016, and delisted its stock because it had not maintained an average global market capitalization of $15M over 30 consecutive trading days. From a revenue high of around $108M in its fiscal 2014, the company's revenues steadily shrank in the wake of its September 2013 IPO. The company officially filed for Chapter 11…
The upcoming OpenStack Summit in Barcelona builds on a familiar theme: the unprecedented momentum that OpenStack has gained, and continues to gain, among firms of all shapes and sizes, including enterprises, cloud and telecom service providers, and hyperscalers. At the summit, the community will seek to showcase the fruits of streamlining product development and project coordination, maintaining currency with market trends, and, more importantly, actively listening to its constituents of developers, end users, and vendors. IDC anticipates that the "Newton" release message, "One versatile platform," backed by key themes such as scalability, resiliency, and user experience, will resonate stronger and louder with attendees, setting the stage for an even bigger footprint for OpenStack in 2017.
The OpenStack Summit in Barcelona, to be held October 25-27, 2016, could not have been held at a more opportune time for the community. As 2016 draws to a close, it will no doubt be labeled a year that saw noteworthy mergers, splits, spin-offs, and even "spin-mergers" – all of which will collectively leave an indelible mark on the industry landscape. Call it what you may, the fact is that much of the IT industry is being disrupted, and is responding accordingly. In such trying times, this…
As flash storage and network throughput evolve through the next several technology generations, a significant imbalance looms. As organizations decide which storage architecture to adopt - network storage or hyperconverged - it is important to understand how each of these technologies is evolving within their IT infrastructure.
Over the last several decades, processor performance improvements significantly outpaced storage performance improvements. All Flash Arrays (AFAs) which leverage flash media have helped to close this gap, which is one of the main drivers of their rapid penetration into mainstream datacenter infrastructure. Network performance is another key determinant in the actual performance that applications see, and existing network latencies and bandwidths have made flash performance accessible to them. …
At the recently held HPE Discover conference, HPE announced Synergy, its foray into Composable Infrastructure solutions. This announcement is timely, as IDC is in the process of formalizing its research on Composable and Disaggregated Infrastructure. This blog post provides a quick take on how IDC views this technology and the impact it will have on the storage, server, and networking markets.
HPE Synergy - HPE's foray into the world of Composable Infrastructure - was one of the key highlights of the 2016 HPE Discover event. While the product, portfolio and solutions were announced late last year, Discover set the stage for the strategic significance of the "Synergy" brand for HPE. Given that IDC is in the midst of its own research on the topic of Composable and Disaggregated Infrastructures, this blog post (a series of questions and answers) serves to provide a quick take on how we…
About this channel
- 532k views
- 81 articles
- 7 followers
IDC's Infrastructure and Data Management Blog is the home for IDC storage analysts to share their thoughts on technology, market and industry trends, announcements, movers and shakers, innovative ideas, and recent research.