Synopsis
Storage Developer Podcast, created by developers for developers.
Episodes
-
#106: Container Attached Storage (CAS) with openEBS
19/08/2019 | Duration: 39 min
Applying microservice patterns to storage gives each workload its own Container Attached Storage (CAS) system. This puts the DevOps persona in full control of the storage requirements and brings data agility to k8s persistent workloads. We will go over the concept and the implementation of CAS, as well as its orchestration. Learning Objectives: 1) Review modern-day apps and their storage needs, under the notion that applications have changed but someone forgot to tell storage; 2) What are the problems in using user-space I/O, in particular technologies like SPDK, among others; 3) Looking at DevOps and the k8s model, how can we put the power of user-space storage in developers' hands? Virtio for containers? Direct access from the Go runtime, for example via SPDK?; 4) We have tried both and would like to share the outcome with you.
-
#105: Dual-Mode SSD Architecture for Next-Generation Hyperscale Data Centers
13/08/2019 | Duration: 42 min
The increasing proliferation of Artificial Intelligence, e-commerce, Big Data, and Cloud applications is leading to highly diversified workloads and use cases in hyperscale data centers, which poses new challenges to solid state storage in terms of performance, flexibility, and TCO optimization. Moreover, there are increasing demands for software/hardware co-optimization and more control over the I/O path from applications. Standard SSDs that are tuned for a few generic workloads cannot meet these challenges, resulting in suboptimal performance and TCO. We present our Dual-Mode SSD Architecture, a new storage architecture designed for our next-generation hyperscale data centers. We define our Open Channel SSD specification and build a Dual-Mode SSD platform that supports both Open Channel mode and standard NVMe mode. We develop our Open Channel software stack in full user space as well as in kernel space. Working seamlessly with our storage engine software, we build customized FTL solutions for different business app
-
#104: Introduction to Open-Channel/Denali Solid State Drives
05/08/2019 | Duration: 38 min
The talk covers (i) the characteristics of open-channel SSD management, (ii) the new open-channel/Denali interface, and (iii) background on LightNVM, the Linux subsystem that we designed and implemented for open-channel SSD management; lastly, we show the effectiveness of open-channel SSDs against state-of-the-art block-based SSDs. Learning Objectives: 1) Understand Open-Channel SSDs; 2) Project status; 3) Background on storage and applications.
-
#103: PCI Express: What’s Next for Storage
29/07/2019 | Duration: 40 min
PCI Express® (PCIe®) 3.0 architecture has enabled flash storage to transition to high-speed, low-latency, power-efficient performance over the past few years. However, the hunger for additional performance in power-constrained devices continues, and PCI-SIG® continues its nearly three-decade history of delivering performance doubling and additional features with the development of the PCIe 4.0 and PCIe 5.0 specifications. This presentation will review the major features of PCIe 4.0 and PCIe 5.0 technology, which will continue to enable the power-efficient performance required as NAND capacities scale and faster SCMs (Storage Class Memories) become mainstream. Session attendees will gain insight into the current status of the PCIe 4.0 technology rollout and testing and will learn about the PCIe 5.0 specification development and timeline for completion in 2019. Learning Objectives: 1) Learn how PCIe is becoming the I/O of choice for storage; 2) Gain insight into the status of PCIe 4.0 roll-out for storage applicatio
-
#102: Achieving 10-Million IOPS from a single VM on Windows Hyper-V
15/07/2019 | Duration: 44 min
Many server workloads, for example OLTP database workloads, require high I/O throughput and low latency. With the industry trend of moving high-end scale-up workloads to virtualization environments, it is essential for cloud providers and on-premises servers to achieve near-native performance by reducing I/O virtualization overhead, which mainly comes from two sources: DMA operations and the interrupt delivery mechanism for I/O completions. Direct PCIe NVMe device assignment techniques allow a VM to interact with hardware devices directly and avoid the traditional Hyper-V para-virtualized I/O path. To improve interrupt handling in the virtualization environment, Intel introduced Posted Interrupts (PI) as an enhanced method to mitigate interrupt delivery overhead in a virtualized environment, bypassing hypervisor involvement completely. In this talk, we will present Microsoft's implementation and optimization of Intel PI and Hyper-V direct PCIe NVMe access on the Windows platform. The results showed that we were able to achi
-
#101: Introduction to Persistent Memory Configuration and Analysis Tools
08/07/2019 | Duration: 44 min
Have you heard of non-volatile/persistent memory but don't know how to get started with this disruptive technology? Memory is the new Storage. Next-generation storage tiered architectures are evolving with persistent memory and hardware delivering NVDIMMs. Are you a Linux or Windows application developer familiar with C, C++, Java, or Python, keen to develop the next revolutionary application or modify an existing application, but not sure where to start? Do you know which performance and analysis tools can be used to identify optimizations in your app to take advantage of persistent memory? Are you a software, server, or cloud architect who wants to get a jump start on this disruptive technology? This presentation will get you started on the persistent memory solution path. The future is in your hands. The future is now! Learning Objectives: 1) We'll deliver an introductory understanding of persistent memory, introduce the SNIA Programming Model, Direct Access (DAX) filesystems, and show where persistent me
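Not from the episode itself, but as a rough sketch of the load/store access model it introduces: the Python snippet below memory-maps a file on a DAX-capable filesystem and writes to it directly. The mount point, file name, and size are assumptions; production code would typically use PMDK rather than plain mmap.

```python
# Minimal sketch (assumptions: /mnt/pmem is a DAX-mounted filesystem; the file
# and size are placeholders). Shows byte-addressable access via mmap.
import mmap
import os

PATH = "/mnt/pmem/example.dat"   # hypothetical persistent memory file
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as pm:
    pm[0:13] = b"hello, pmem!\n"   # store directly into the mapping
    pm.flush()                     # ask that the stores reach the media
os.close(fd)
```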
-
#100: A Comparison of In-storage Processing Architectures and Technologies
24/06/2019 | Duration: 39 min
In-situ processing, in-storage processing, smart SSD, computational storage... many names for the same concept: a closer integration of computing capabilities and data storage in order to reduce data movement, leading to better performance and lower power consumption. This is a new trend in storage and computing architectures. How do we define this new type of product: storage with embedded processing, or processing with embedded storage? This talk presents an analysis of the in-storage processing trend, including a comparison of different architectures, a computing and storage technologies roadmap, and a list of application use cases. Learning Objectives: 1) In-storage processing architectures; 2) Technology roadmap for integration; 3) Application use cases.
-
#99: SNIA Nonvolatile Memory Programming TWG - Remote Persistent Memory
17/06/2019 | Duration: 47 min
The SNIA NVMP Technical Workgroup (TWG) continues to make significant progress on defining the architecture for interfacing applications to PM. In this talk, we will focus on the important Remote Persistent Memory scenario and how the NVMP TWG's programming model applies. Application use of these interfaces, along with fabric support such as RDMA and platform extensions, is part of this, and the talk will describe how the larger ecosystem fits together to support PM as low-latency remote storage. Learning Objectives: 1) Persistent Memory programming; 2) RDMA extensions; 3) SNIA PM initiatives.
-
#98: Rethinking Ceph Architecture for Disaggregation Using NVMe-over-Fabrics
10/06/2019 | Duration: 34 min
Ceph protects data by making 2-3 copies of the same data, but that means 2-3x more storage servers and related costs. It also means higher write latencies as data hops between OSD nodes. Customers are now starting to deploy Ceph using SSDs for high-performance workloads and for data lakes supporting real-time analytics. We describe a novel approach that eliminates the added server cost by creating containerized, stateless OSDs and leveraging NVMe-over-Fabrics to replicate data in server-less storage nodes. We propose redefining the boundaries of separation within SDS architectures to address disaggregation overheads. Specifically, we decouple control and data plane operations and transfer block ownership to execute on remote storage targets. This also dramatically reduces write latency, enabling Ceph to be used for databases and speeding up large file writes. As part of the solution, we also describe how OSD node failover is preserved via a novel mechanism using standby stateless OSD nodes. Learning Objectives:
-
#97: Delivering Scalable Distributed Block Storage using NVMe over Fabrics
03/06/2019 | Duration: 48 min
The NVMe and NVMe over Fabrics (NVMe-oF) protocols provide highly efficient access to flash storage inside a server and over the network, respectively. The current generation of distributed storage software stacks uses proprietary protocols which are sub-optimal for delivering end-to-end low latency. Moreover, this increases the operational complexity of managing both NVMe-oF flash storage and distributed flash storage in private cloud infrastructure. In this session, we present NVMe over Fabrics based high-performance distributed block storage that combines the best of both worlds to deliver performance, elasticity, and rich data services. Learning Objectives: 1) NVMe and NVMe-oF flash data path I/O architecture; 2) Programming, architecture, and optimization for flash; 3) Distributed storage, data services.
-
#96: Solid State Datacenter Transformation
20/05/2019 | Duration: 48 min
Intel Fellow Amber Huffman has been at the center of Intel's development of SSDs, with emphasis on SSD storage interfaces and next-generation form factors. In this talk, she will discuss the rationale behind decisions made in advancing storage architecture, leading to the emergence of solid-state-only data centers. Amber will also discuss the key factors influencing the future of the data center and the important role storage continues to play.
-
#95: Tunneling through Barriers
13/05/2019 | Duration: 44 min
Join Dr. Andy Walker for a wonderfully illustrated tour through 90 years of physics and materials science, leading to modern solid state memory technologies via “The Golden Thread of Tunneling.”
-
#94: Key Value Storage Standardization Progress
06/05/2019 | Duration: 47 min
NVMe KV is a proposal for a new command structure to access data on an NVMe controller; it is being developed within the NVMe technical working group. The proposed command set provides a key and a value to store data on the non-volatile media, and provides a key to retrieve data stored on the media. In addition to the work on the NVMe specification, SNIA is also working on a Key Value API. This presentation will describe the standardization efforts going on in both the NVMe working group and SNIA. Learning Objectives: 1) What is Key Value Storage; 2) Where we are in the NVMe standardization process for NVMe KV; 3) Where we are in the SNIA standardization process for a KV API.
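To make the store/retrieve semantics concrete, here is a purely hypothetical Python sketch; KVStore and its methods are invented names for illustration and are not the NVMe KV command set or the SNIA Key Value API.

```python
# Hypothetical illustration only: store/retrieve by key, as described in the
# episode. A toy in-memory stand-in for a key-value namespace, not a real API.
from typing import Dict, Optional


class KVStore:
    def __init__(self) -> None:
        self._media: Dict[bytes, bytes] = {}

    def store(self, key: bytes, value: bytes) -> None:
        # One command carries both the key and the value to be written.
        self._media[key] = value

    def retrieve(self, key: bytes) -> Optional[bytes]:
        # The key alone addresses the data; no block/LBA translation involved.
        return self._media.get(key)


kv = KVStore()
kv.store(b"user:42", b'{"name": "Ada"}')
print(kv.retrieve(b"user:42"))
```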
-
#93: Redfish Ecosystem Update
30/04/2019 | Duration: 52 min
The DMTF's Redfish is an open industry standard specification and schema that specifies a RESTful interface and utilizes JSON and OData to help customers integrate solutions within their existing tool chains. Designed to meet end users' expectations for simple, modern, and secure management of scalable platform hardware, the Redfish method and approach has seen accelerated adoption across the IT infrastructure management standards community. This session provides an overview of the Redfish standard as well as information about what's new in the Redfish ecosystem, including PLDM for RDE (Redfish Device Enablement) and other DMTF updates, as well as consideration and adoption in the broader standards community. The session will then cover the general data model, including the base storage models and infrastructure used by the SNIA Swordfish extensions. Learning Objectives: 1) Basic understanding of the Redfish data model hierarchy; 2) Knowledge of the problem areas to which the Redfish methodolo
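As a small illustration of the RESTful JSON interface described here (not taken from the session), the Python sketch below reads the standard /redfish/v1/ service root; the endpoint address and credentials are placeholders.

```python
# Minimal sketch: fetch the Redfish service root over HTTPS and follow a link.
# The BMC address and credentials are assumptions for illustration.
import requests

BMC = "https://bmc.example.com"   # hypothetical management endpoint
AUTH = ("admin", "password")      # placeholder credentials

resp = requests.get(f"{BMC}/redfish/v1/", auth=AUTH, verify=False, timeout=10)
resp.raise_for_status()
root = resp.json()

# The service root links to the main collections, e.g. Systems and Chassis.
print(root.get("Systems", {}).get("@odata.id"))
```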
-
#92: Fibre Channel – The Most Trusted Fabric Delivers NVMe
16/04/2019 | Duration: 32 min
As data-intensive workloads transition to low-latency NVMe flash-based storage to meet increasing user demand, the Fibre Channel industry is combining the lossless, highly deterministic nature of Fibre Channel with NVMe. FC-NVMe targets the performance, application response time, and scalability needed for next-generation data centers while leveraging existing Fibre Channel infrastructure. This presentation will provide an overview of why Fibre Channel's inherent multi-queue capability, parallelism, deep queues, and battle-hardened reliability make it an ideal transport for NVMe across the fabric. Learning Objectives: 1) A reminder of how Fibre Channel works; 2) A reminder of how NVMe over Fabrics works; 3) A high-level overview of Fibre Channel and NVMe, especially how they work together.
-
#91: Memory Class Storage and its Impact
08/04/2019 | Duration: 52 min
Nantero NRAM™ is a new class of memory with the potential to add non-volatility to existing RAM applications. It can be arranged in a crosspoint structure for large memories or a 1T-nR arrangement for smaller, faster arrays, in standalone devices or as embedded RAM. NRAM uses carbon nanotubes in a dielectric-free structure to achieve unlimited write endurance. While there are obvious advantages to this class of device, including replacing DRAM in storage devices, there are a number of less obvious changes to how designers approach the data storage hierarchy. Decoupling cache size from battery backup power lets designers rethink performance profiles. Exploitation of various interfaces into the system is examined, from SATA to PCIe to the many options on the DRAM bus. This presentation also explores the growing application space for artificial intelligence, deep learning, and in-memory computing, and considers the impact of a high-performance non-volatile memory in those use cases. Learning Objectives: 1) Awar
-
#90: FPGA Accelerator Disaggregation Using NVMe-over-Fabrics
25/03/2019 | Duration: 37 min
Leveraging the NVMe standard to present FPGA accelerators as NVMe namespaces allows user-space code to access accelerators as simple block devices via standard in-box drivers. NVMe-over-Fabrics (NVMe-oF) is a network protocol for NVMe that is used to communicate between a host machine and NVMe devices over high-performance Ethernet networks. Leveraging NVMe-oF allows an accelerator that presents as a standard NVMe namespace to be shared across existing transports such as RDMA, TCP/IP, or Fibre Channel. With NVMe-oF, client machines can borrow accelerators over the fabrics connection, which can then be accessed as if they were local to the system, allowing the same user code to run on direct-attached accelerators and over-fabrics accelerators. The ease and granularity available when setting up the over-fabrics connections allows servers to share accelerators on demand, allowing for the disaggregation of acceleration compute resources. Using NVMe Controller Memory Buffer (CMB), the fabrics connection is further
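To illustrate the "accelerator as a plain block device" idea (not material from the talk), the Python sketch below opens an NVMe namespace with the standard in-box driver and reads one block; the device path and block size are assumptions.

```python
# Minimal sketch: user-space access to an NVMe namespace as an ordinary block
# device. /dev/nvme0n1 and the 4 KiB block size are placeholders; an FPGA
# accelerator exposed as a namespace would be accessed the same way.
import os

DEV = "/dev/nvme0n1"   # hypothetical namespace backed by the accelerator
BLOCK = 4096           # assumed logical block size

fd = os.open(DEV, os.O_RDONLY)
try:
    data = os.pread(fd, BLOCK, 0)   # read one block at offset 0
    print(f"read {len(data)} bytes from {DEV}")
finally:
    os.close(fd)
```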
-
#89: OpenSDS Flash Manageability using Swordfish for Cloud-native Frameworks
20/03/2019 | Duration: 52 min
Cloud-native frameworks are becoming a de facto choice for deploying scalable micro-services. Flash-based storage manageability in container frameworks needs to evolve to deliver scalable storage services to container workloads. OpenSDS (a Linux Foundation project) aims to provide a unified software-defined storage control plane to simplify flash storage manageability for on-prem and cloud micro-services. In this session, we present the NVMe and NVMe over Fabrics (NVMe-oF) manageability architecture in OpenSDS using Swordfish APIs, a case study with Kubernetes, and the community roadmap to deliver a unified flash manageability vision. Learning Objectives: 1) Flash manageability using Swordfish APIs; 2) OpenSDS Linux Foundation project overview (unified storage control plane); 3) Container flash driver integration; 4) Kubernetes.
-
#88: Swimming With SNIA Swordfish
11/03/2019 | Duration: 52 min
SNIA's Scalable Storage Management Technical Work Group (SSM TWG) has developed an open industry standard specification that provides a unified approach for the management of storage systems and data services. Swordfish™ is an extension of the DMTF Redfish specification developed by the Storage Networking Industry Association (SNIA) to provide a unified approach for the management of storage equipment and services in converged, hyper-converged, hyperscale, and cloud infrastructure environments, making it easier for IT administrators and DevOps to integrate scalable solutions into their data centers. This session will present an overview of the SNIA Swordfish specification and show how Swordfish takes and extends the Redfish specification to deliver the Swordfish storage model. It will also cover the drivers for the SNIA Swordfish approach, as well as provide a comprehensive overview of the functionality included in the Swordfish specification. It will also present an overview of common storage man
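As a rough, hedged sketch of how a client might walk a Swordfish storage collection over the Redfish-style REST interface (not from the session): the endpoint and credentials are placeholders, and the collection path can differ between Swordfish versions.

```python
# Minimal sketch: enumerate a storage collection and print basic health.
# HOST and AUTH are assumptions; the collection path is version-dependent
# (e.g. /redfish/v1/Storage vs. /redfish/v1/StorageServices).
import requests

HOST = "https://storage-mgmt.example.com"   # hypothetical management service
AUTH = ("admin", "password")                # placeholder credentials

coll = requests.get(f"{HOST}/redfish/v1/Storage",
                    auth=AUTH, verify=False, timeout=10).json()
for member in coll.get("Members", []):
    item = requests.get(HOST + member["@odata.id"],
                        auth=AUTH, verify=False, timeout=10).json()
    print(item.get("Id"), item.get("Status", {}).get("Health"))
```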
-
#87: Latest developments with NVMe/TCP
04/03/2019 | Duration: 51 min
NVMe over Fabrics is a powerful standard that provides fast access to non-volatile memory devices across fabric interconnects. An emerging NVMe-oF transport is good old TCP/IP. Its benefits are obvious: it is fast, scalable, well understood, and extremely simple to deploy. TCP/IP is, after all, the most widely used network protocol of them all, well known and heavily implemented in every data center. The major question is how to achieve high performance and low latency with TCP/IP. As most designers know, TCP/IP has pitfalls, such as timeout-based retransmissions and incast. However, performance results are very promising. The NVMe Technical Working Group is building both the standard and its open-source implementation in concert. Its prominent features include data integrity and transport layer security. In practice, NVMe/TCP is an excellent transport for networked flash and complements NVMe/RDMA and NVMe/FC nicely.