Storage Developer Conference

  • Author: Various
  • Narrator: Various
  • Publisher: Podcast
  • Duration: 115:54:10


Synopsis

The Storage Developer Podcast, created by developers for developers.

Episodes

  • #146: Understanding Compute Express Link

    25/05/2021 Duration: 41min

    Compute Express Link™ (CXL™) is an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. Datacenter architectures are evolving to support the workloads of emerging applications in Artificial Intelligence and Machine Learning that require a high-speed, low-latency, cache-coherent interconnect. The CXL specification delivers breakthrough performance while leveraging PCI Express® technology to support rapid adoption. It addresses resource sharing and cache coherency to improve performance, reduce software stack complexity, and lower overall system costs, allowing users to focus on target workloads. Attendees will learn how CXL technology maintains a unified, coherent memory space between the CPU (host processor) and CXL devices, allowing a device to expose its memory as coherent in the platform and to directly cache coherent memory. This allows both the CPU and device to share resources for higher performance and reduced software stack complexity.

  • #145: The Future of Accessing Files Remotely from Linux: SMB3.1.1 Client Status Update

    30/04/2021 Duration: 45min

    Improvements to the SMB3.1.1 client on Linux have continued at a rapid pace over the past year. These allow Linux to better access Samba servers, as well as the cloud (Azure), NAS appliances, Windows systems, Macs, and an ever-increasing number of embedded Linux devices, including those using the new SMB3 kernel server for Linux (ksmbd). The SMB3.1.1 client for Linux (cifs.ko) continues to be one of the most actively developed file systems on Linux, and these improvements have made it possible to run additional workloads remotely. The exciting recent addition of the new kernel server also allows more rapid development and testing of optimizations for Linux. Over the past year, performance has dramatically improved with features like multichannel (allowing better parallelization of I/O and utilization of multiple network devices simultaneously), much faster encryption and signing, better use of compounding, and improved support for RDMA. Security has improved, and alternative security models are now possible.

  • #144: Key Value Standardized

    06/04/2021 Duration: 50min

    The NVMe Key Value (NVMe-KV) Command Set has been standardized as one of the new I/O Command Sets that NVMe supports. Additionally, SNIA has standardized a Key Value API that works with NVMe Key Value, allowing access to data on a storage device using a key rather than a block address. The NVMe-KV Command Set uses the key to store a corresponding value on non-volatile media, then retrieves that value from the media by specifying the corresponding key. Key Value allows users to access key-value data without the costly and time-consuming overhead of additional translation tables between keys and logical blocks. This presentation will discuss the benefits of Key Value storage, present the major features of the NVMe-KV Command Set and how it interacts with the NVMe standards, and present open source work that is available to take advantage of Key Value storage. Learning Objectives: Present the standardization of the SNIA KV API,Present the standardization of the NVMe Key Value Command Set,Present the benefits of Key Value storage.
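    The contrast the abstract draws — key addressing on the device versus a host-maintained translation table between keys and logical blocks — can be sketched in a toy model. This is not the SNIA KV API or the NVMe-KV command set, just an illustration of the idea; all class and method names here are invented for the example.

    ```python
    # Toy contrast: block addressing (host keeps a key->LBA table) vs.
    # key-value addressing (device indexes by key). Illustrative only.

    class BlockDevice:
        """Block-addressed store: the host must maintain a translation table."""
        def __init__(self):
            self.blocks = {}       # LBA -> bytes
            self.key_to_lba = {}   # extra host-side translation layer

        def put(self, key, value):
            lba = len(self.blocks)          # naive allocator, for illustration
            self.blocks[lba] = value
            self.key_to_lba[key] = lba      # table lookup needed on every access

        def get(self, key):
            return self.blocks[self.key_to_lba[key]]

    class KVDevice:
        """Key-value store: the key itself addresses the data, no host table."""
        def __init__(self):
            self.store = {}

        def put(self, key, value):
            self.store[key] = value         # 'Store': key selects the value

        def get(self, key):
            return self.store[key]          # 'Retrieve': same key reads it back

    kv = KVDevice()
    kv.put(b"user:42", b"payload")
    assert kv.get(b"user:42") == b"payload"
    ```

    The saved indirection (the `key_to_lba` table) is exactly the "additional translation tables between keys and logical blocks" the abstract says Key Value avoids.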

  • #143: Deep Compression at Inline Speed for All-Flash Array

    31/03/2021 Duration: 35min

    The rapid improvement in overall $/GByte has driven high-performance All-Flash Arrays to be increasingly adopted in both enterprise and cloud datacenters. Besides raw NAND density scaling with continued semiconductor process improvement, data reduction techniques have played and will continue to play a crucial role in further reducing the overall effective cost of All-Flash Arrays. One of the key data reduction techniques is compression. Compression can be performed both inline and offline. In fact, the best All-Flash Arrays often do both: fast inline compression at a lower compression ratio, and slower, opportunistic offline deep compression at a significantly higher compression ratio. However, with the rapid growth of both capacity and sustained throughput due to the consolidation of workloads on a shared All-Flash Array platform, a growing percentage of the data never gets the opportunity for deep compression. There is a deceptively simple solution: inline deep compression, with the additional benefit of reduced flash wear…
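    The fast-inline versus slow-deep trade-off described above can be demonstrated with zlib's compression levels standing in for the two engines an array might use; this is a hedged illustration, not the algorithms any particular array actually runs.

    ```python
    import zlib

    # Low level ~ fast inline compression (lower ratio); high level ~ slower
    # offline "deep" compression (higher ratio). Levels are a stand-in for
    # whatever fast/deep codecs a real All-Flash Array would use.
    data = b"all-flash array block with repetitive content " * 2000

    inline = zlib.compress(data, level=1)   # fast path, lower ratio
    deep = zlib.compress(data, level=9)     # slow path, higher ratio

    # Both are lossless; deep compression should not be larger here.
    assert zlib.decompress(inline) == data
    assert zlib.decompress(deep) == data
    assert len(deep) <= len(inline)
    print(f"raw={len(data)} inline={len(inline)} deep={len(deep)}")
    ```

    The gap between the two compressed sizes is the capacity left on the table when data "never gets the opportunity for deep compression."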

  • #142: ZNS: Enabling in-place Updates and Transparent High Queue-Depths

    16/03/2021 Duration: 45min

    Zoned Namespaces represent the first step towards the standardization of Open-Channel SSD concepts in NVMe. Specifically, ZNS brings the ability to implement data placement policies in the host, thus providing a mechanism to (i) lower the write-amplification factor (WAF), (ii) lower NAND over-provisioning, and (iii) tighten tail latencies. Initial ZNS architectures envisioned large zones targeting archival use cases. This motivated the creation of the “Append Command” - a specialization of nameless writes that makes it possible to increase the device I/O queue depth beyond the initial limitation imposed by the zone write pointer. While this is an elegant solution, backed by academic research, the changes required to file systems and applications are making adoption more difficult. As an alternative, we have proposed exposing a per-zone random write window that allows out-of-order writes around the existing write pointer. This solution brings two benefits over the “Append Command”: First, it allows I/Os to arrive out-of-order within the window…
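    A minimal sketch of the mechanics discussed above — the zone write pointer, Zone Append, and a per-zone random write window — may help; it is a simplified toy model, not the NVMe ZNS command set, and the `Zone` class and its parameters are invented for illustration.

    ```python
    # Toy ZNS zone: plain writes must match the write pointer (wp); Zone Append
    # lets the device pick the LBA; a random write window relaxes the wp rule.

    class Zone:
        def __init__(self, size, window=0):
            self.size = size
            self.wp = 0            # next LBA a plain write must target
            self.window = window   # random write window width (0 = strict ZNS)
            self.data = {}

        def write(self, lba, buf):
            # Writes must land at the wp, or within the window past it.
            if lba >= self.size or not (self.wp <= lba <= self.wp + self.window):
                raise ValueError("write pointer violation")
            self.data[lba] = buf
            # Advance wp past the lowest contiguous run of written LBAs.
            while self.wp in self.data:
                self.wp += 1

        def append(self, buf):
            # Zone Append: device assigns the LBA, so many appends can be
            # in flight without serializing on the wp.
            lba = self.wp
            self.write(lba, buf)
            return lba

    strict = Zone(size=100)
    strict.write(0, b"a")
    try:
        strict.write(5, b"x")      # out-of-order: rejected, so QD > 1 stalls
    except ValueError:
        pass

    windowed = Zone(size=100, window=8)
    windowed.write(3, b"c")        # out-of-order but inside the window: accepted
    windowed.write(0, b"a")
    ```

    The strict zone is why plain writes cap the queue depth; the window variant shows how out-of-order arrivals near the write pointer can still be accepted.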

  • #141: Unlocking the New Performance and QoS Capabilities of the Software-Enabled Flash API

    02/03/2021 Duration: 51min

    The Software-Enabled Flash API gives unprecedented control to application architects and developers to redefine the way they use flash for their hyperscale applications, by fundamentally redefining the relationship between the host and solid-state storage. Dive deep into new Software-Enabled Flash concepts such as virtual devices, Quality of Service (QoS) domains, Weighted Fair Queueing (WFQ), Nameless Writes and Copies, and controller offload mechanisms. This talk by KIOXIA (formerly Toshiba Memory) will include real-world examples using the new API to define QoS and latency guarantees, enforce workload isolation, minimize write amplification through application-driven data placement, and achieve higher performance with customized flash translation layers (FTLs). Learning Objectives: Provide an in-depth dive into using the Software-Enabled Flash API,Map application workloads to Software-Enabled Flash structures,Understand how to implement QoS requirements using the API.

  • #140: Introduction to libnvme

    11/02/2021 Duration: 45min

    The NVM Express workgroup introduces new features frequently, and the Linux kernel supporting these devices evolves with it. These ever-moving targets create challenges for tool developers as new interfaces are created and older ones change. This talk will provide information on some of these recent features and enhancements, and introduce the open source 'libnvme' project: an open source library, available in public git repositories, that provides access to all NVM Express features with convenient abstractions over the kernel interfaces interacting with your devices. The session will demonstrate integrating the library with other programs, and also provide an opportunity for the audience to share what additional features they would like to see from this common library in the future. Learning Objectives: Explain protocol and host operating system interaction complexities,Introduce libnvme and how it manages those relationships,Demonstrate integration with applications.

  • #139: Use Cases for NVMe-oF for Deep Learning Workloads and HCI Pooling

    02/02/2021 Duration: 58min

    The efficiency, performance, and choice in NVMe-oF are enabling some very unique and interesting use cases – from AI/ML to hyperconverged infrastructures. Artificial Intelligence workloads process massive amounts of data from structured and unstructured sources. Today most deep learning architectures rely on local NVMe to serve up tagged and untagged datasets into map-reduce systems and neural networks for correlation. NVMe-oF for deep learning infrastructures enables a shared data model for ML/DL pipelines without sacrificing overall performance and training times. NVMe-oF is also enabling HCI deployments to scale without adding more compute, enabling end customers to reduce dark flash and reduce cost. The talk explores these and several innovative technologies driving the next storage connectivity revolution. Learning Objectives: Storage architectures for Deep Learning workloads,Extending the reach of HCI platforms using NVMe-oF,Ethernet Bunch of Flash architectures.

  • #138: NVMe 2.0 Specification Preview

    12/01/2021 Duration: 53min

    NVMe is the fastest growing storage technology of the last decade and has succeeded in unifying client, hyperscale, and enterprise applications into a common storage framework. NVMe has evolved from being a disruptive technology to becoming a core element in storage architectures. In this session, we will talk about the NVMe transition to a merged base specification inclusive of both NVMe and NVMe-oF architectures. We will provide an overview of the latest NVMe technologies, summarize the NVMe standards roadmap, and describe the latest NVMe standardization initiatives. NVMe technology will present a number of areas of innovation that preserve our simple, fast, scalable paradigm while extending the broad appeal of NVMe architecture. These continued innovations will ready the NVMe technology ecosystem for yet another period of growth and expansion. Learning Objectives: Learn about the NVMe transition to a merged base specification inclusive of both NVMe and NVMe-oF architectures. Receive a summary of the NVMe standards roadmap…

  • #137: Caching on PMEM: an Iterative Approach

    17/12/2020 Duration: 43min

    With PMEM boasting a much higher density and DRAM-like performance, applying it to in-memory caching such as memcached seems like an obvious thing to try. Nonetheless, there are questions when it comes to new technology. Would it work for our use cases, in our environment? How much effort does it take to find out if it works? How do we capture the most value with a reasonable investment of resources? How can we continue to find a path forward as we make discoveries? At Twitter, we took an iterative approach to explore cache on PMEM. With significant early help from Intel, we started with simple tests in memory mode in a lab environment, and moved on to app_direct mode with modifications to Pelikan (pelikan.io), a modular open-source cache backend developed by Twitter. With positive results from the lab runs, we moved the evaluation to platforms that more closely represent Twitter’s production environment, and uncovered interesting differences. With a better understanding of how Twitter’s cache workload behaves on PMEM…

  • #136: Introducing SDXI

    01/12/2020 Duration: 39min

    Software-based memory-to-memory data movement is common, but takes valuable cycles away from application performance. At the same time, offload DMA engines are vendor-specific and may lack capabilities around virtualization and user-space access. This talk will focus on how SDXI (Smart Data Acceleration Interface), a newly formed SNIA TWG, is working to bring an extensible, virtualizable, forward-compatible, memory-to-memory data movement and acceleration interface specification. As new memory technologies get adopted and memory fabrics expand the use of tiered memory, data mover acceleration and its uses will increase. This TWG will encourage adoption of and extensions to this data mover interface. Learning Objectives: A new proposed standard for a memory-to-memory data movement interface,A new TWG to develop this standard,Use cases where this will apply to evolving storage architectures with memory pooling and persistent memory.

  • #135: SmartNICs and SmartSSDs, the Future of Smart Acceleration

    04/11/2020 Duration: 50min

    Since the advent of the smartphone over a decade ago, we've seen several new "Smart" technologies, but few have had a significant impact on the data center until now. SmartNICs and SmartSSDs will change the landscape of the data center, but what comes next? This talk will summarize the state of the SmartNIC market by classifying and discussing the technologies behind the leading products in the space. Then it will dive into the emerging technology of SmartSSDs and how they will change the face of storage and solutions. Finally, we'll dive headfirst into the impact of PCIe 5 and Compute Express Link (CXL) on the future of Smart Acceleration in solution delivery. Learning Objectives: Understand the current state of the SmartNIC market and leading products.,Introduce the concept of SmartSSDs and two products available today.,Discuss the future of Device to Device (D2D) communications using PCIe, CXL/CCIX.,Lay out a vision for composable solutions leveraging multiple devices on a PCIe bus communicating directly…

  • #134: Best Practices for OpenZFS L2ARC in the Era of NVMe

    07/10/2020 Duration: 53min

    The ZFS L2ARC is now more than 10 years old. Over that time, a lot of secret incantations and tribal knowledge have been created by users, testers, developers, and the odd sales or marketing person. That collection of community wisdom informs the use and/or tuning of ZFS L2ARC for certain IO profiles, dataset sizes, server class, share protocols, and device types. In this talk, we will review a case study in which we tested a few of these L2ARC myths on an NVMe-capable OpenZFS storage appliance. Can high-speed NVMe flash devices keep L2ARC relevant in the face of ever-increasing memory capacity for ARC (primary cache) and all-flash storage pools? Learning Objectives: 1) Overview of ZFS L2ARC design goals and high level implementation details that pertain to our findings; 2) Performance characteristics of L2ARC during warming and when warmed, plus any tradeoffs or pitfalls with L2ARC in these states; 3) How to leverage NVMe as L2ARC devices to improve performance in a few storage use cases.

  • #133: NVMe based Video and Storage solutions for Edged based Computational Storage

    02/09/2020 Duration: 40min

    5G wireless technology will bring vastly superior data rates to the edge of the network. However, with this increase in bandwidth will come applications that significantly increase overall network throughput. Video applications will likely explode as end users have large amounts of data bandwidth to work with. Video will not only require advanced compression but will also require large amounts of data storage. Combining advanced compression technologies with storage allows a high density of storage and compression in a small amount of rack space with little power, ideal for placement at the edge of the network. An NVMe-based module provides the opportunity to use computational storage elements to enable edge compute and video compression. This presentation will provide technical details and various options for combining video and storage on an NVMe interface. Further, it will explore how this NVMe device can be virtualized for both storage and video in an edge compute environment. Learning Objectives: 1) Understand…

  • #132: Emerging Scalable Storage Management Functionality

    10/08/2020 Duration: 38min

    By now, you have a good understanding of SNIA Swordfish™ and how it extends the DMTF Redfish® specification to manage storage equipment and services. Attend this presentation to learn what’s new and how the specification has evolved since last year. The speaker will share the latest updates, ranging from details of features and profiles to new vendor-requested functionality supporting the specification from direct-attached storage to NVMe. You won’t want to miss this opportunity to be brought up to speed. Learning Objectives: 1) Educate the audience on what’s new with Swordfish; 2) Describe features and profiles and why they are useful; 3) Provide an overview of vendor-requested Swordfish functionality.

  • #131: Redfish Ecosystem for Storage

    04/08/2020 Duration: 55min

    DMTF's Redfish® is a standard API designed to deliver simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Both human readable and machine capable, Redfish leverages common Internet and web services standards to expose information directly to the modern tool chain. This presentation will provide an overview of Redfish, what’s new in the Redfish ecosystem, as well as adoption in the broader standards community. You’ll also learn more about the general Redfish data model, including the base storage models and infrastructure that are used by SNIA Swordfish extensions. Learning Objectives: 1) Introduce the DMTF Redfish API; 2) Provide an update on the latest Redfish developments; 3) Understand how SNIA Swordfish builds on Redfish.
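    The abstract's point that Redfish is "both human readable and machine capable" rests on its hypermedia model: a client starts at the well-known service root (`/redfish/v1/`) and follows `@odata.id` links to other resources. The sketch below shows that traversal over a hand-made sample payload; the JSON is illustrative, not output from a real BMC, and only the service-root URI and `@odata.id` convention come from the spec.

    ```python
    import json

    # Hand-made sample of a Redfish service-root response (illustrative only).
    # A real client would GET https://<bmc>/redfish/v1/ to obtain this.
    sample_service_root = json.loads("""
    {
      "@odata.id": "/redfish/v1/",
      "Id": "RootService",
      "Name": "Root Service",
      "Systems": { "@odata.id": "/redfish/v1/Systems" },
      "Chassis": { "@odata.id": "/redfish/v1/Chassis" }
    }
    """)

    def resource_links(doc):
        """Collect the @odata.id links a client would follow next."""
        return {k: v["@odata.id"] for k, v in doc.items()
                if isinstance(v, dict) and "@odata.id" in v}

    links = resource_links(sample_service_root)
    print(links)
    ```

    SNIA Swordfish extends the same pattern: its storage resources hang off this tree and are discovered by following the same kind of links.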

  • #130: SNIA Nonvolatile Memory Programming TWG

    27/07/2020 Duration: 52min

    The SNIA NVMP TWG continues to make significant progress on defining the architecture for interfacing applications to PM. In this talk, we will focus on the important Remote Persistent Memory scenario, and how the NVMP TWG’s programming model applies. Application use of these interfaces, along with fabric support such as RDMA and platform extensions, are part of this, and the talk will describe how the larger ecosystem fits together to support PM as low-latency remote storage.

  • #129: So, You Want to Build a Storage Performance Testing Lab?

    13/07/2020 Duration: 55min

    Whether you are a storage vendor, consumer, or developer, the performance of storage solutions affects you. Assessing the performance of large and complex storage solutions requires some level of performance testing lab, and there are many factors to consider. From network topology to load generator CPU, all components must be selected and configured with care to avoid unintended bottlenecks. In this session, we will review a few best practices and lessons learned, including: whether virtual clients are feasible and my experiences attempting performance testing on several different hypervisors, best practices for network configuration, and how to use maximum effective data rates to avoid unintended bottlenecks. Finally, we will conclude with a review of data comparing different physical load-generating hardware and its effect on measured performance. Learning Objectives: 1) Effect of load-generating client hardware on measured performance; 2) Avoiding unintended bottlenecks by using interconnect maximum effective data rates…
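    The "maximum effective data rate" idea above is simple arithmetic: the link's raw rate discounted by per-frame protocol overhead. A back-of-the-envelope sketch for a single TCP stream on 10 GbE follows; the overhead figures are standard Ethernet/IPv4/TCP sizes, and a real lab would adjust for jumbo frames, VLAN tags, or TCP options.

    ```python
    # Best-case effective data rate of a TCP stream on 10 GbE with a
    # standard 1500-byte MTU. Use this ceiling to decide whether a measured
    # result is a real bottleneck or just protocol overhead.

    LINK_GBPS = 10
    MSS = 1460                  # TCP payload per 1500-byte-MTU frame
    WIRE_BYTES = (
        MSS
        + 20 + 20               # IPv4 + TCP headers (no options)
        + 14 + 4                # Ethernet header + FCS
        + 8 + 12                # preamble/SFD + inter-frame gap
    )                           # = 1538 bytes on the wire per frame

    efficiency = MSS / WIRE_BYTES
    effective_gbps = LINK_GBPS * efficiency
    print(f"efficiency={efficiency:.3f}, ~{effective_gbps:.2f} Gb/s "
          f"({effective_gbps / 8:.2f} GB/s) best case")
    ```

    If a load generator tops out near this number, the interconnect — not the storage under test — is the limit, which is exactly the kind of unintended bottleneck the talk warns about.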

  • #128: Surfing the World Wide File

    15/06/2020 Duration: 39min

    SMB 3.1.1 is the state of the art for secure remote file access, but deploying it for clouds and mobile users can be very challenging; TCP/445 is often blocked, networks are often slow, and edge file servers are often feared. The Microsoft SMB3 team has now built the first implementation of SMB3 over QUIC, a UDP/TLS transport pioneered by Google. This allows secure tunneling of SMB3 over internet-friendly ports. Furthermore, we have added compression for SMB3, which allows significant data savings over congested and low bandwidth networks. In this talk we’ll discuss these new options, as well as other recent security and feature capabilities nearing completion. Learning Objectives: 1) SMB3 over new transport; 2) SMB3 over wide area networks; 3) SMB3 protocol update.

  • #127: Object Storage Workload Testing Tools

    09/06/2020 Duration: 47min

    Attendees of this presentation will learn how to use several open source tools (https://github.com/jharriga/) to evaluate object storage platforms. These tools provide automation and customer-based object storage workloads for activities such as filling a cluster, aging a cluster, and running steady-state mixed-operation workloads. One of the tools, RGWtest, automates pool creation, logs cluster statistics such as system resource utilization (CPU and memory), and submits workloads through COSbench - Intel’s open source object storage benchmark tool. A demonstration of the tools will be part of the presentation. Learning Objectives: 1) How to install, configure, and execute the object storage workload tools; 2) How to interpret workload run results; 3) How to design and size object storage workloads.

Page 1 of 8