Latency and Risk - Takeaways from the HPC Summit for Wall Street 2012

By Michael Versace
April 9, 2012

Risk and analytics, two hot topics on Wall Street, dominated the distributed storage and open-networking discussions at last week's 2012 High Performance Computing for Wall Street conference in New York.

At this show, participants clearly understood the connection between latency and risk, and they translated that understanding into a healthy, voracious interest in solutions that increase the speed of their trading operations and reduce latency from the trading desk through to back-office operations. As lower price volatility and trading volumes squeeze profits, and as new regulations loom that require more reporting and analysis of historical data, the firms represented at the event continued to seek out technical infrastructures that either remove complexity and cost (through standards and shared-services models, for example) or deliver significant performance gains straight through to the application and the user experience.

At this event I moderated a panel built on the following premise: the ability of servers to process high volumes of data, coupled with advances in the bandwidth of storage media, has begun to reorder traditional storage architectures. In large server environments, having compute resources and storage resources on separate ends of a uniform network is no longer practical. Our panel, which included participants from the Chicago Mercantile Exchange, former Credit Suisse architects at Oktay, LSI, and IBM, looked into the advent of distributed storage, caching strategies at the server and network layers, and new compute frameworks including OpenFlow, and how software engineers can use them to accelerate application performance.

Takeaways from the panel and the packed audience:

  • The mandate for application performance (remember the model of "latency=risk") on Wall Street is erasing the divide between storage and servers
    • Performance mandates are driving flash-based storage, database performance, IO performance, and application acceleration strategies
    • Flash-based storage must be architected "in" at one of two locations: inside servers as internal storage, or on the network as flash-based storage arrays.
  • The predominant use case for these emerging capabilities is trading risk and bank book analytics
    • Multi-class Asset Modeling, Portfolio Valuation, Simulations
    • Liquidity Forecasting and Testing, High Frequency Trading
  • The metrics are different for these IO-intensive, high-performance financial applications
    • Cost per IOPS (input/output operations per second) vs. cost per storage capacity (a worked example follows this list)
  • Infrastructure Considerations
    • Consider flash cache options (Tier 0) in servers and storage arrays first, then a traditional tiered storage strategy
    • Through emerging standards from the open networking community (e.g., OpenFlow), software-defined networking allows the performance functions of a switch or router to be controlled by application developers from outside the device via a standard API (a minimal sketch follows below)
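
To make the cost-per-IOPS metric concrete, here is a short worked sketch in Python. The prices, capacities, and IOPS figures are assumed round numbers for illustration only, not figures quoted by the panel or any vendor.

```python
# Why flash can win on cost per IOPS even when it loses on cost per GB.
# All prices, capacities, and IOPS figures below are assumed round
# numbers for illustration only.

def cost_metrics(name, price_usd, capacity_gb, iops):
    """Compute both storage cost metrics for one device."""
    return {
        "device": name,
        "cost_per_gb": price_usd / capacity_gb,
        "cost_per_iops": price_usd / iops,
    }

devices = [
    #             name,            price, capacity, sustained random IOPS
    cost_metrics("15K RPM disk",     400,      600,       200),
    cost_metrics("PCIe flash card", 4000,      700,   100_000),
]

for d in devices:
    print(f"{d['device']:>15}: "
          f"${d['cost_per_gb']:,.2f} per GB, "
          f"${d['cost_per_iops']:,.4f} per IOPS")
```

Run with these assumed numbers, the disk comes out around $0.67 per GB but $2.00 per IOPS, while the flash card is roughly $5.71 per GB but about $0.04 per IOPS, which is why IO-bound trading and analytics workloads are priced against the second metric rather than the first.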
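
The OpenFlow point is easier to picture in code. The sketch below shows how an application could push a forwarding rule to a switch from outside the device through an OpenFlow controller's northbound REST API, for example to pin a latency-sensitive market-data feed to a dedicated fast-path port. The controller address, endpoint path, and JSON schema are assumptions, loosely modeled on controller REST interfaces of that era (such as Ryu's ofctl_rest), so treat this as a sketch rather than any specific product's documented API.

```python
# Sketch: programming a switch's forwarding behavior from application code
# via an OpenFlow controller's northbound REST API. The URL, endpoint path,
# and JSON field names are assumptions for illustration; check your
# controller's documentation for the real interface.
import json
import urllib.request

CONTROLLER = "http://localhost:8080"  # assumed controller address

flow_rule = {
    "dpid": 1,                                   # switch (datapath) to program
    "priority": 100,                             # beat the default rules
    "match": {"in_port": 1, "tp_dst": 9001},     # e.g. a market-data feed
    "actions": [{"type": "OUTPUT", "port": 2}],  # forward on the fast path
}

request = urllib.request.Request(
    url=f"{CONTROLLER}/stats/flowentry/add",     # assumed endpoint path
    data=json.dumps(flow_rule).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print("controller responded with HTTP", response.status)
```

The design point the panel made is visible here: the forwarding decision lives in application code that can be versioned, tested, and tuned by the same engineers who own the trading application, rather than in a per-device configuration session on the switch itself.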

So at the end of the day, the audience came away with the sense that by pushing the fastest storage as close to the application as possible, and by placing more network performance control in the hands of software designers, Wall Street will ultimately reduce the latency inherent in its technical infrastructure and, in turn, reduce risk and improve the user experience.  But this is certainly not a one-size-fits-all strategy.  At a minimum, to get this risk benefit, CIOs will require active engagement with storage and networking infrastructure owners (the CTOs) to align applications and data with the value they bring to the organization, and to continue to architect and integrate Wall Street applications with an expanding set of storage and network performance-enhancing capabilities.
