Cell Software Model
Cell Programming Workshop, Cell/Quasar Ecosystem & Solutions Enablement
Cell Programming Workshop 11/14/2018
Class Objectives – Things You Will Learn

Programming models that exploit Cell features
Class Agenda

- Cell Programming Model Overview
- PPE Programming Models
- SPE Programming Models
- Parallel Programming Models
- Multi-tasking SPEs
- Cell Software Development Flow

References: Michael Day, Ted Maeurer, and Alex Chow, Cell Software Overview.
Trademarks: Cell Broadband Engine and Cell Broadband Engine Architecture are trademarks of Sony Computer Entertainment, Inc.
The Role of Cell Programming Models

Cell provides massive computational capacity and huge communication bandwidth, but these resources are distributed. A properly selected Cell programming model gives the programmer a systematic, cost-effective framework for applying Cell resources to a particular class of applications. A programming model may be supported by language constructs, runtimes, libraries, or object-oriented frameworks.

Peak performance (single precision): > 200 GFlops (Pentium D: ~30 GFlops). EIB bandwidth: 96 bytes/cycle.

Notes: auto-partitioning divides work between the PPE and SPEs, and among multiple SPEs; overlay/tiling partitioning handles code larger than the 256 KB local store, e.g. using OpenMP; data partitioning can use a software-managed cache.
Cell Programming Models

Single-Cell environment:
- PPE programming models
- SPE programming models: small single-SPE models and large single-SPE models
- Multi-SPE parallel programming models
- Cell Embedded SPE Object Format (CESOF)

(Figure: small and large single-SPE models in the SPE local store, PPE threads, and multi-SPE models in the BE-level effective address space.)

With the computational efficiency of the SPEs, software developers can create programs that manage dataflow themselves, as opposed to leaving dataflow to a compiler or to later optimization.
Cell Programming Models (continued)

- Multi-tasking SPEs: local-store-resident multi-tasking, self-managed multi-tasking, kernel-managed SPE scheduling and virtualization
- Final programming model points
PPE Programming Model
PPE Programming Model (Participation)

- The PPE is a 64-bit PowerPC core that hosts operating systems and a hypervisor.
- A PPE program inherits traditional programming models.
- In the Cell environment, a PPE program serves as a controller or facilitator:
  - CESOF support provides SPE image handles to the PPE runtime.
  - The PPE program establishes a runtime environment for SPE programs, e.g. memory mapping, exception handling, SPE run control.
  - It allocates and manages Cell system resources: SPE scheduling, hypervisor CBEA resource management.
  - It provides OS services to SPE programs and threads, e.g. printf and file I/O.
Single-SPE Programming Model
Small Single-SPE Models

- Single-tasked environment, small enough to fit into the 256 KB local store; sufficient for many dedicated workloads.
- Separate SPE and PPE address spaces (LS / EA).
- Explicit input and output of the SPE program: program arguments and exit code per the SPE ABI (Application Binary Interface), DMA, mailboxes, and SPE-side system calls.
- Foundation for a function-offload model or a synchronous RPC model, facilitated by an interface description language (IDL).

Notes: SPE threads follow an M:N thread model, meaning M threads distributed over N processor elements, and run to completion. The SDK Linux kernel supports a run-to-completion model, except for certain preemptive debugging services.

The RPC model is implemented using stubs as proxies. The main code on the PPE contains a stub for each remote procedure on the SPEs, and each procedure on an SPE has a stub that runs the procedure and communicates with the PPE. The IDL compiler, available in the SDK, facilitates the RPC function offload: the stub code, together with the runtime code, controls execution, data transfer, and program coordination between the PPE and SPE. A procedure is loaded onto an SPE only once, and the PPE program can then make multiple calls to that procedure without reloading it. When the PPE program calls a remote procedure, it actually calls that procedure's stub located on the PPE; the stub initializes the SPE with the necessary data and code, packs the procedure's parameters, and sends a mailbox message to the SPE to start its stub procedure. The RPC runtime library coordinates the interaction between the PPE and the SPEs and manages the task queues; PPE requests for SPE execution are represented in the runtime as task structures.

On invocation of a remote procedure call, the PPE program can either wait for the procedure to return (synchronous execution) or continue processing and synchronize with the procedure later (asynchronous execution). Whether a remote procedure is synchronous or asynchronous is specified by the procedure's definition in the IDL file.
Small Single-SPE Models – Tools and Environment

- The SPE compiler/linker compiles and links an SPE executable.
- The SPE executable image is embedded as referenceable read-only data in the PPE executable (CESOF).
- A Cell programmer controls an SPE program via a PPE controlling process and its SPE management library, i.e. loads, initializes, and starts/stops the SPE program.
- The PPE controlling process, the OS on the PPE, and the runtime (on the PPE or SPE) together establish the SPE runtime environment: argument passing, memory mapping, system-call service.
Small Single-SPE Models – A Sample

/* spe_foo.c: A C program to be compiled into an SPE executable called "spe_foo" */
#include <stdio.h>

int main(int speid, addr64 argp, addr64 envp)
{
    char i;

    /* do something intelligent here */
    i = func_foo(argp);

    /* when the syscall is supported */
    printf("Hello world! my result is %d\n", i);
    return i;
}
Small Single-SPE Models – PPE Controlling Program

extern spe_program_handle_t spe_foo; /* the SPE image handle from CESOF */

int main()
{
    int rc, status;
    speid_t spe_id;

    /* load & start the spe_foo program on an allocated SPE */
    spe_id = spe_create_thread(0, &spe_foo, 0, NULL, -1, 0);

    /* wait for the SPE program to complete and return its final status */
    rc = spe_wait(spe_id, &status, 0);
    return status;
}

The spe_create_thread interface:

speid_t spe_create_thread(spe_gid_t gid, spe_program_handle_t *spe_program_handle,
                          void *argp, void *envp, unsigned long mask, int flags);

gid: Identifier of the SPE group that the new thread will belong to. SPE group identifiers are returned by spe_create_group. The new SPE thread inherits scheduling attributes from the designated SPE group. If gid is equal to SPE_DEF_GRP (0), a new group is created with default scheduling attributes, as set by calling spe_group_defaults.

spe_program_handle: Indicates the program to be executed on the SPE. This is an opaque pointer to an SPE ELF image that has already been loaded and mapped into system memory. The pointer is normally provided as a symbol reference to an SPE ELF executable image that has been embedded into a PPE ELF object and linked with the calling PPE program. It can also be established dynamically: by loading a shared library containing an embedded SPE ELF executable, using dlopen(2) and dlsym(2), or by using the spe_open_image function to load and map a raw SPE ELF executable.

argp: An optional pointer to application-specific data, passed as the second parameter to the SPE program.

envp: An optional pointer to environment-specific data, passed as the third parameter to the SPE program.

mask: The processor-affinity mask for the new thread. Each bit in the mask enables (1) or disables (0) thread execution on a CPU; at least one bit must be enabled. If equal to -1, the new thread can be scheduled for execution on any processor. The affinity mask for an SPE thread can be changed by calling spe_set_affinity, or queried by calling spe_get_affinity.

flags: A bit-wise OR of modifiers applied when the new thread is created. The following values are accepted:
- 0: No modifiers are applied.
- SPE_CFG_SIGNOTIFY1_OR: Configure the Signal Notification 1 Register in "logical OR" mode instead of the default "overwrite" mode.
- SPE_CFG_SIGNOTIFY2_OR: Configure the Signal Notification 2 Register in "logical OR" mode instead of the default "overwrite" mode.
- SPE_MAP_PS: Request permission for memory-mapped access to the SPE thread's problem-state area. Direct access to problem state is a real-time feature, and may only be available to programs running with privileged authority (in Linux, to processes with access to CAP_RAW_IO; see capget(2) for details).
- SPE_USER_REGS: Specifies that the SPE setup registers r3, r4, and r5 are initialized with the 48 bytes pointed to by argp.
Large Single-SPE Programming Models

- The PPE controller maps system memory for SPE DMA transfers when the data or code working set cannot fit completely into the local store.
- The PPE controlling process, kernel, and libspe runtime set up the system-memory mapping as the SPE's secondary memory store.
- The SPE program accesses the secondary memory store via its software-controlled DMA engine, the Memory Flow Controller (MFC).

(Figure: the SPE program in the local store exchanges data with system memory via DMA transactions.)
Large Single-SPE Programming Models – I/O Data

System memory holds large input/output data, e.g. in a streaming model: the SPE program DMAs small working buffers (int ip[32], int op[32]) in and out of large global arrays in system memory (int g_ip[512*1024], int g_op[512*1024]), computing op = func(ip) on each chunk.
Large Single-SPE Programming Models

System memory as secondary memory store for data:
- Manual management of data buffers
- Automatic software-managed data cache: software cache framework libraries, compiler runtime support

(Figure: global objects in system memory map to software-cache entries in the local store, accessed by the SPE program.)
Software Cache Low-Level API

Depends on the cache type; gives the programmer direct control:
- Look up, branch to the miss handler, wait for DMA completion
- Custom interfaces: multiple lookups, special data types, cache locking

#include <spe_cache.h>

unsigned int __spe_cache_rd(unsigned int ea)
{
    unsigned int ea_aligned = (ea) & ~SPE_CACHELINE_MASK;
    int set, line, byte, missing;
    unsigned int ret;

    missing = _spe_cache_dmap_lookup_(ea_aligned, set);
    line = _spe_cacheline_num_(set);
    byte = _spe_cacheline_byte_offset_(ea);
    ret = *((unsigned int *) &spe_cache_mem[line + byte]);
    if (unlikely(missing)) {
        _spe_cache_miss_(ea_aligned, set, 0, 1);
        spu_writech(22, SPE_CACHE_SET_TAGMASK(set));
        spu_mfcstat(MFC_TAG_UPDATE_ALL);
        ret = *((unsigned int *) &spe_cache_mem[line + byte]);
    }
    return ret;
}
High-Level APIs

Common operations: cached data read and write, pre-touch, flush, invalidate, etc. These simplify programming and hide the details of DMA.

#include <spe_cache.h>

#define LOAD1(addr) \
    (*((char *) spe_cache_rd(addr)))
#define STORE1(addr, c) \
    (*((char *) spe_cache_wr(addr)) = (c))

void memcpy_ea(uint dst, uint src, uint size)
{
    while (size > 0) {
        char c = LOAD1(src);
        STORE1(dst, c);
        size--;
        src++;
        dst++;
    }
}
Large Single-SPE Programming Models – Overlays and Plug-ins

System memory as secondary memory store for code:
- Manual loading of plug-ins into a code buffer (plug-in framework libraries)
- Automatic and manual software-managed code overlay (compiler- and linker-generated overlay code)

An overlay is SPU code that is dynamically loaded and executed by a running SPU program; it cannot be independently loaded or run on an SPE. Overlays are useful when code does not fit in an SPE's local store.

SPE plug-ins allow the programmer to manage SPU code in a modular fashion: the specific SPU code that is needed at runtime is loaded dynamically. This differs from other SPE programming models in that the required code cannot be known ahead of time. A plug-in uses the stack of the running SPU program, cannot make global external references, and cannot communicate with the running SPU program other than through parameters passed in and out.

(Figure: local store divided into a non-overlay region holding func main and func f, overlay region 1 holding func a, d, or e, and overlay region 2 holding func b or c.)
Large Single-SPE Programming Models – Overlay Tooling

- Compile files without change.
- Link with a linker script specifying the manual overlay structure; anything not specified goes in the non-overlay region:

OVERLAY : {
    .segment1 {a.o(.text)}
    .segment2 {d.o(.text)}
    .segment3 {e.o(.text)}
}
OVERLAY : {
    .segment1 {b.o(.text)}
    .segment2 {c.o(.text)}
}

- The linker automatically includes an overlay manager and call stubs to invoke it; you can optionally provide your own overlay manager.

(Figure: .c source files pass through the SPU compiler to .o files; the SPU linker combines them with the linker script and overlay manager into an executable with overlays.)
Large Single-SPE Programming Models – Job Queue

- Code and data are packaged together as inputs to an SPE kernel program.
- A multi-tasking model – more discussion later.

(Figure: a job queue in system memory holds code/data packages n, n+1, n+2, ...; the SPE kernel DMAs code n and data n into the local store.)
Large Single-SPE Programming Models – DMA

- DMA latency handling is critical to overall performance for SPE programs moving large amounts of data or code.
- Data pre-fetching is a key technique for hiding DMA latency, e.g. double buffering.

(Figure: while the SPE executes Func(input n) from buffer 1, the MFC concurrently DMAs output n-1 out of and input n+1 into buffer 2; the buffers swap roles each iteration.)
Large Single-SPE Programming Models – CESOF

The Cell Embedded SPE Object Format (CESOF) and the PPE/SPE toolchains support the resolution of SPE references to global system-memory objects in the effective-address space.

(Figure: the _EAR_g_foo structure in the local store resolves, via CESOF EAR symbol resolution, to char g_foo[512] in the effective-address space; DMA transactions move the data into char local_foo[512].)
Parallel Programming Models
Parallel Programming Models

- Traditional parallel programming models are applicable, based on interacting single-SPE programs.
- Parallel SPE program synchronization mechanisms:
  - Cache-line-based MFC atomic-update commands (similar to the PowerPC lwarx, ldarx, stwcx., and stdcx. instructions)
  - SPE input and output mailboxes with the PPE
  - SPE signal notification registers
  - SPE events and interrupts
  - SPE busy-polling of a shared memory location

Notes: The PPE acts as a control and system-service facility while multiple SPEs work in parallel. The work is partitioned manually by the programmer or automatically by the compilers. The SPEs must efficiently schedule the MFC DMA commands that move instructions and data. This model communicates among SPEs either through shared memory or via message passing.
Parallel Programming Models – Shared Memory

- The CBE can be programmed as a shared-memory multiprocessor, even though the PPE and SPEs have different instruction sets and compilers.
- Access data by address; random access in nature.
- CESOF support for shared effective-address variables.
- With a proper locking mechanism, large SPE programs may access shared-memory objects located in the effective-address space.
- Compiler OpenMP support.

Notes: The SPEs and the PPE fully interoperate in a cache-coherent shared-memory multiprocessor model; all SPE DMA operations are cache-coherent. A shared-memory load is replaced by a DMA from shared memory to the local store (LS), followed by a load from LS to the register file; the DMA uses an effective address common to the PPE and all SPEs. A shared-memory store is replaced by a store from the register file to the LS, followed by a DMA from LS to shared memory. The SPE's DMA lock-line commands provide the equivalent of the PowerPC Architecture atomic-update primitives (load with reservation and store conditional). A compiler or interpreter could manage part of the LS as a local cache for instructions and data obtained from shared memory.
Parallel Programming Models – Shared Memory (continued)

- SPEs and the PPE fully interoperate in a cache-coherent model; SPE DMA operations are cache-coherent and use an effective address common to the PPE and all SPEs.
- SPE shared-memory store instructions are replaced by a store from the register file to the LS, then a DMA from LS to shared memory.
- SPE shared-memory load instructions are replaced by a DMA from shared memory to LS, then a load from LS to the register file.
- A compiler could manage part of the LS as a software cache for instructions and data obtained from shared memory.
Parallel Programming Models – Message Passing

- Access data by connection; sequential in nature.
- Applicable to SPE programs whose addressable data space spans only the local store.
- The message connection is still built on top of the shared-memory model.
- Compared with the software-cache shared-memory model, a more efficient runtime is possible: there is no address-information handling overhead once connected.
- LS-to-LS DMA is optimized for data streaming through a pipeline model.
Parallel Programming Models – Streaming

- SPE-initiated DMA; a large array of data is fed through a group of SPE programs.
- A special case of the job queue with regular data: each SPE program locks the shared job queue to obtain the next job.
- For uneven jobs, workloads are self-balanced among the available SPEs.
- Data-parallel: each SPE runs the same kernel.

(Figure: the PPE controls SPE0–SPE7, each running Kernel(); inputs I0..In stream from system memory and outputs O0..On stream back.)

Notes: In the streaming model, each SPE, in either a serial or parallel pipeline, computes data that streams through; the PPE acts as a stream controller, and the SPEs act as stream-data processors. For the SPEs, on-chip load and store bandwidth exceeds off-chip DMA-transfer bandwidth by an order of magnitude. The PPE and SPEs support message passing between the PPE, the processing SPE, and other SPEs. For example, the Euler particle-system simulation implements the streaming model.
Parallel Programming Models – Pipeline

- Task-parallel: each SPE runs a different kernel stage (Kernel0() on SPE0, Kernel1() on SPE1, ..., Kernel7() on SPE7).
- Uses LS-to-LS DMA bandwidth rather than system-memory bandwidth.
- Flexibility in connecting pipeline functions.
- Larger collective code size per pipeline; load balancing is harder.

(Figure: inputs I0..In stream from system memory through SPE0's Kernel0(), then via LS-to-LS DMA through the later stages up to SPE7's Kernel7(), and outputs O0..On stream back to system memory.)
Multi-tasking SPEs Model
Multi-tasking SPEs – LS-Resident Multi-tasking

- The simplest multi-tasking programming model; suitable for event-driven applications.
- No memory protection among tasks.
- Co-operative, non-preemptive, event-driven scheduling: FIFO run-to-completion models, or lightweight cooperatively-yielding models.
- A single SPE can run only one thread at a time; it cannot support multiple simultaneous threads.

(Figure: an event queue feeds an event dispatcher in the local store of SPE n, which dispatches to resident tasks a, b, c, d, ..., x.)
Multi-tasking SPEs – Self-Managed Multi-tasking

- Non-LS-resident: tasks are staged between a job queue in system memory and the local store.
- A blocked job's context is swapped out of the LS and scheduled back to the job queue once unblocked.

(Figure: a task queue and job queue in system memory hold tasks n+1, n+2, ... and the swapped-out task n'; the SPE kernel DMAs code n and data n for task n into the local store.)
Multi-tasking SPEs – Kernel Managed

- Kernel-level SPE management model:
  - SPE as a device resource
  - SPE as a heterogeneous processor
  - SPE resource represented as a file system
- SPE scheduling and virtualization:
  - Maps running threads onto a physical SPE or a group of SPEs
  - Allows more concurrent logical SPE tasks than the number of physical SPEs
  - High context save/restore overhead favors a run-to-completion scheduling policy
  - Supports pre-emptive scheduling when needed
  - Supports memory protection
Programming Model Final Points

- A proper programming model reduces development cost while achieving higher performance.
- Programming frameworks and abstractions help with productivity.
- Mixing programming models is common practice, and new models may be developed for particular applications.
- With the vast computational capacity, it is not hard to achieve a performance gain over an existing legacy base; top performance is harder.
- Tools are critical to improving programmer productivity.
Special Notices -- Trademarks
This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM offerings available in your area. In no event will IBM be liable for damages arising directly or indirectly from any use of the information contained in this document. Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY USA. All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or guarantees either expressed or implied. All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations and conditions. IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal without notice. IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies. All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. Many of the features described in this document are operating system dependent and may not be available on Linux. Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document should verify the applicable data for their specific environment. Revised January 19, 2006
Special Notices (Cont.) -- Trademarks
The following terms are trademarks of International Business Machines Corporation in the United States and/or other countries: alphaWorks, BladeCenter, Blue Gene, ClusterProven, developerWorks, e business(logo), e(logo)business, e(logo)server, IBM, IBM(logo), ibm.com, IBM Business Partner (logo), IntelliStation, MediaStreamer, Micro Channel, NUMA-Q, PartnerWorld, PowerPC, PowerPC(logo), pSeries, TotalStorage, xSeries; Advanced Micro-Partitioning, eServer, Micro-Partitioning, NUMACenter, On Demand Business logo, OpenPower, POWER, Power Architecture, Power Everywhere, Power Family, Power PC, PowerPC Architecture, POWER5, POWER5+, POWER6, POWER6+, Redbooks, System p, System p5, System Storage, VideoCharger, Virtualization Engine. Cell Broadband Engine and Cell Broadband Engine Architecture are trademarks of Sony Computer Entertainment, Inc. in the United States, other countries, or both. Rambus is a registered trademark of Rambus, Inc. XDR and FlexIO are trademarks of Rambus, Inc. UNIX is a registered trademark in the United States, other countries or both. Linux is a trademark of Linus Torvalds in the United States, other countries or both. Fedora is a trademark of Redhat, Inc. Microsoft, Windows, Windows NT and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries or both. Intel, Intel Xeon, Itanium and Pentium are trademarks or registered trademarks of Intel Corporation in the United States and/or other countries. AMD Opteron is a trademark of Advanced Micro Devices, Inc. Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States and/or other countries. TPC-C and TPC-H are trademarks of the Transaction Processing Performance Council. SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are trademarks of the Standard Performance Evaluation Corp (SPEC). AltiVec is a trademark of Freescale Semiconductor, Inc. PCI-X and PCI Express are registered trademarks of PCI SIG. InfiniBand is a trademark of the InfiniBand Trade Association. Other company, product and service names may be trademarks or service marks of others. Revised July 23, 2006
Special Notices – Copyrights

(c) Copyright International Business Machines Corporation 2005. All Rights Reserved. Printed in the United States September 2005. The following are trademarks of International Business Machines Corporation in the United States, or other countries, or both: IBM, IBM Logo, Power Architecture. Other company, product and service names may be trademarks or service marks of others. All information contained in this document is subject to change without notice. The products described in this document are NOT intended for use in applications such as implantation, life support, or other hazardous uses where malfunction could result in death, bodily injury, or catastrophic property damage. The information contained in this document does not affect or change IBM product specifications or warranties. Nothing in this document shall operate as an express or implied license or indemnity under the intellectual property rights of IBM or third parties. All information contained in this document was obtained in specific environments, and is presented as an illustration. The results obtained in other operating environments may vary. While the information contained herein is believed to be accurate, such information is preliminary, and should not be relied upon for accuracy or completeness, and no representations or warranties of accuracy or completeness are made. THE INFORMATION CONTAINED IN THIS DOCUMENT IS PROVIDED ON AN "AS IS" BASIS. In no event will IBM be liable for damages arising directly or indirectly from any use of the information contained in this document. IBM Microelectronics Division, 1580 Route 52, Hopewell Junction, NY.