1 Imagine a world where…
- Learner centric / technology enabled = System flexibility, efficiency, interoperability, JIT access, blending of training/learning/ops
- Data driven = Adaptive system tailors training experiences to what, where, when, and how you need them; measuring performance meaningfully
- Learning science = Learning is guided by evidence-based best practices, monitored, and continuously improved
- Social Learning = Technology enables action from self, teachers/trainers/commanders, and peers
- Learning Organizations = Organizations learn (don't just capture) lessons and disseminate them effectively

4 This paper defines five enabling conditions of a future military learning environment: one that reliably produces savvy, operationally adept individuals across all echelons, promotes a culture of organizational learning, and expands the breadth, depth, and agility of our Human Dimension. Admittedly, it's a big idea. By painting this high-level picture of the "art of the possible," we hope to promote a conversation about a collective strategy for the future of military learning.

As constituents of the military learning enterprise, if we work in isolation and pursue diverse projects that individually achieve limited short-term goals, we might eventually arrive at the desired emergent outcome (after considerable investment). If we work towards a shared vision, however, we can achieve success with more surety and efficiency. This means designing the entire learning system with the strategic outcome in mind, optimizing the whole system (rather than individual, siloed parts of it), and considering the human element throughout that design effort. We need to work in concert towards a shared vision, a grand strategy, with a high level of coordination among agencies, industry, and research centers.

The building blocks of the five conditions outlined above already exist; yet no one has operationalized, integrated, or collectively implemented them in real military learning environments. Individual projects and other examples showcase the possibilities of each concept. They are like the raw materials needed to build a house, and the future military learning strategy (to which this paper contributes) is the blueprint for the building. We still need to put the pieces together, which is no small task. More work is needed, but we have reached critical mass in understanding of, and demand for, the future learning capability. The timing is right to unleash the full potential of our Human Dimension.
All the resources are here: the science, the technology, and the demand. All we need is a shared strategy and the will to pursue it.

5 ADL History
The Advanced Distributed Learning (ADL) Initiative reports to the Office of the Under Secretary of Defense for Personnel and Readiness (OUSD(P&R)), under the Deputy Assistant Secretary of Defense (Force Education & Training). ADL was established by Presidential Executive Order in 1999 to explore how federal training programs, initiatives, and policies can better support lifelong learning through learning technology. It conducts R&D and provides learning standards, specifications, and applications that can be sustained and extended to incorporate new technologies and learning science as they emerge.

From that Order: "A coordinated Federal effort is needed to provide flexible training opportunities to employees and to explore how Federal training programs, initiatives, and policies can better support lifelong learning through the use of learning technology.... The policy should promote and integrate the effective use of training technologies to create affordable and convenient training opportunities to improve Federal employee performance." (Executive Order 13111, President Clinton, January 12, 1999)

ADL primarily conducts 6.3 (Advanced Technology Development) work, with the goal of improving learning effectiveness and efficiency for the DoD, the whole of government, and beyond. Three large categories of ADL's work:
1. Leadership: Serve as a leader for learning science and technology topics within the DoD, other government agencies, and the greater professional community; help craft the future vision of learning science and tech.
2. Innovation: Develop the next generation of learning science techniques and technologies via research, development, and collaboration; show the "art of the possible" via applied R&D.
3. Transition: Make learning science techniques and technologies easier for the DoD and other agencies to implement effectively and affordably; provide customer support to facilitate implementation.

Director: Sae Schatz

6 What's ADL?
Established? 1999, by Executive Order.
What's ADL? A 6.3 (Advanced Technology Development) R&D program. Program Element service code: SE.
What's the (human-readable) mission? WHY: Efficiently enhance human performance via distributed learning, and in turn increase readiness and mission effectiveness. HOW: Advanced R&D on distributed learning science and technologies.
Top stakeholders? (1) US DoD / security sector; (2) whole of government; (3) coalition partners.
What does ADL do?
- Thought Leadership: Help craft the vision for future learning science and technology.
- Innovation: Mature learning ideas and technologies.
- Transition: Help bridge the research-practice gap.
Supporting activities and topics include: emerging-concepts exploration; internal R&D; design-based research; requirements engineering; the BAA research portfolio; visioning and dissemination; active outreach to DoD and gov't, including the Defense ADL Advisory Committee (DADLAC); collaborative research; the ADL Partnership Network; coalition engagement (NATO, TTCP, PfPC); ADL communities of practice; an emphasis on open source; e-learning, m-learning and support, VR and simulations, and learning theory; policy and specifications (e.g., DoDI, SCORM, xAPI, STANAG 2591); TLA infrastructure; performance capture, modeling, and analysis; and learner modeling (e.g., competencies and credentialing, persistent and open models, visualizations).

7 ADL Customers and Stakeholders
DoD and security sector; whole of government; coalition defense partners; industry; and the scholarly research community.

8 FY2015 Performance Metrics (source: http://www.globalsecurity)
In FY2015, ADL will:
- Deliver the next version of the Experience API (xAPI), the first component of the new Training & Learning Architecture.
- Continue research on, and integration of, the PAL application of Project Aristotle.
- Continue to influence key Service and international ADL meetings and conferences regarding the discovery, sharing, and delivery of interoperable training content.
- Promote the sharing of data among the DoD, other federal agencies, and state and local education departments throughout the U.S., by making educational resources discoverable and retrievable and through the open-source initiative.
- Prototype an intelligent tutor, with the intent to determine the utility of this technology across the DoD and as a step toward the more comprehensive PAL.
Metrics include, but are not limited to: scalability, generalizability, and affordability.

9 Program Element (PE)
Each PE represents a mutually exclusive program in the Future Years Defense Program (FYDP) portfolio. A PE code breaks down as follows:
- DoD Program Code (e.g., Central Supply & Maintenance, Strategic Forces, Guard & Reserve Forces): 06 = Research & Development
- R&D Category (e.g., Basic or Applied Research, RDT&E Management): 03 = Advanced Technology Development
- Equipment/Activity Type (e.g., Aircraft & Related Equipment, Ordnance, Ships): 7 = Other Equipment
- Serial Number
- Service (e.g., Army, Navy, DARPA, OSD): SE = DoD Human Resources Activity
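The decomposition above can be expressed as a tiny parser. The field widths and the sample code below are illustrative assumptions (real PE serials vary in length); only the meanings of "06", "03", "7", and "SE" come from the slide.

```python
def parse_program_element(pe: str) -> dict:
    """Split a Program Element code into the fields described on the slide.

    Field positions are assumed for illustration; real PE codes vary in
    serial length and service-suffix format.
    """
    return {
        "dod_program_code": pe[0:2],  # "06" = Research & Development
        "rnd_category": pe[2:4],      # "03" = Advanced Technology Development
        "activity_type": pe[4:5],     # "7"  = Other Equipment
        "serial": pe[5:7],            # program serial number
        "service": pe[7:],            # "SE" = DoD Human Resources Activity
    }

# Hypothetical ADL-style PE code (the "0603xxxSE" pattern is assumed):
fields = parse_program_element("0603769SE")
print(fields["rnd_category"], fields["service"])  # 03 SE
```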

10 ADL R&D Portfolio: Budget Activity 6.3, Advanced Technology Development

TECHNOLOGY READINESS LEVELS (TRL 1-9):
- TRL 1-3: Basic/applied research and feasibility studies
- TRL 4-6: Technology development and demonstration
- TRL 7-9: System development, test, launch, operations

ADL's 6.3 work centers on TRLs 4-6:
- TRL 4: Component and/or breadboard validation in a laboratory. Basic technological components are integrated to establish that they will work together ("low fidelity").
- TRL 5: Component and/or breadboard validation in a relevant setting. Basic technological components and their supporting elements are tested in a realistic simulated environment ("high fidelity").
- TRL 6: Prototype demonstration in a relevant environment. A prototype system, beyond that of TRL 5, is tested in a relevant environment to show the technology's readiness.

About ADL funding and goals. Research funding is defined and cataloged for all budgets with the following references:
- 6.1 / BA 1: Basic Research
- 6.2 / BA 2: Applied Research
- 6.3 / BA 3: Advanced Technology Development
- 6.4 / BA 4: Advanced Component Development and Prototypes
- 6.5 / BA 5: System Development and Demonstration
- 6.6 / BA 6: RDT&E Management Support
- BA 7: Operational System Development

OSD: Interpreting the top-down guidance of the Executive Branch, OSD continues to map DoD-specific S&T investments against the following key topic areas, established in 2012 as galvanizing S&T roadmaps from which the individual services apply resources and match against joint resources maintained in OSD-level programs and activities.

S&T Emphasis Area Roadmaps:
- Autonomy
- Cyber
- Countering Weapons of Mass Destruction
- Data-to-Decisions
- Engineered Resilient Systems
- Electronic Warfare / Electronic Protection
- Human Systems

In addition to the area roadmaps, OSD has identified six priority topics for basic research, based on inputs from a variety of sources, including universities. Providing inputs to DoD on these and other topics is a means to shape DoD research and funding emphasis; many successful DoD programs have evolved from, or were created based on, preliminary information from publications and university engagements.

TRL definitions and example evidence:
- TRL 3: Analytical and experimental critical function and/or characteristic proof of concept. Active R&D is initiated, including analytical and laboratory studies to physically validate the analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative. Evidence: results of laboratory tests performed to measure parameters of interest and comparisons to analytical predictions for critical subsystems, with references to who, where, and when the tests were performed.
- TRL 4: Component and/or breadboard validation in a laboratory environment. Basic technological components are integrated to establish that they will work together; relatively "low fidelity" compared with the eventual system. Examples include integration of "ad hoc" hardware in the laboratory. Evidence: system concepts considered and results from testing laboratory-scale breadboard(s), with references to who did the work and when, and an estimate of how the breadboard hardware and test results differ from the expected system goals.
- TRL 5: Component and/or breadboard validation in a relevant environment. Fidelity of the breadboard technology increases significantly; the basic components are integrated with reasonably realistic supporting elements so they can be tested in a simulated environment. Examples include "high-fidelity" laboratory integration of components. Evidence: results from testing the laboratory breadboard system, integrated with supporting elements, in a simulated operational environment. How does the "relevant environment" differ from the expected operational environment? How do the test results compare with expectations? What problems, if any, were encountered? Was the breadboard refined to more nearly match the expected system goals?
- TRL 6: System/subsystem model or prototype demonstration in a relevant environment. A representative model or prototype system, well beyond TRL 5, is tested in a relevant environment; this represents a major step up in demonstrated readiness. Examples include testing a prototype in a high-fidelity laboratory environment or a simulated operational environment. Evidence: results from laboratory testing of a prototype system near the desired configuration in performance, weight, and volume. How did the test environment differ from the operational environment? Who performed the tests? How did the tests compare with expectations? What problems, if any, were encountered, and what plans, options, or actions were taken to resolve them before moving to the next level?
- TRL 7: System prototype demonstration in an operational environment. The prototype is near or at the planned operational system; this represents a major step up from TRL 6 by requiring demonstration of an actual system prototype in an operational environment (e.g., in an aircraft, in a vehicle, or in space). Evidence: results from testing a prototype system in an operational environment. Who performed the tests? How did the tests compare with expectations? What problems, if any, were encountered, and what plans, options, or actions were taken to resolve them?
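The three TRL bands named above can be captured in a small lookup helper. The band boundaries follow the slide's grouping (1-3 research, 4-6 development, 7-9 system); the function itself is just an illustrative convenience, not part of any DoD tooling.

```python
# The three TRL bands from the slide, as (range, label) pairs.
TRL_PHASES = [
    (range(1, 4), "Basic/applied research and feasibility studies"),
    (range(4, 7), "Technology development and demonstration"),
    (range(7, 10), "System development, test, launch, operations"),
]

def trl_phase(trl: int) -> str:
    """Map a Technology Readiness Level (1-9) to its development band."""
    for band, phase in TRL_PHASES:
        if trl in band:
            return phase
    raise ValueError(f"TRL must be 1-9, got {trl}")

# ADL's 6.3 portfolio sits squarely in the middle band:
print(trl_phase(5))  # Technology development and demonstration
```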

11 ADL Research Areas
Research-area keywords: Learning Theory; e-Learning; m-Learning and support; VR and Simulations; Learner Performance Tracking, Modeling, and Analysis; TLA infrastructure; pedagogy/andragogy; 21st-century competencies; implicit/perceptual learning; transmedia learning; learning experience design; ISD; gamification; learning management systems; MOOCs; adaptive learning; adaptive micro-learning; SCORM; ePub; m-learning delivery; m-learning andragogy; augmented reality; social learning; industrial knowledge design; learning analytics; learner modeling; learning visualizations; learning service-oriented architecture; data handling; persistent learner profiles; content brokering / meta-adaptation; xAPI; open learner models; competencies and credentialing.

Current projects:
1. A Critical Review of the New DoDI on Learning Content Management
2. Competency and Skills System (CASS)
3. DODI Policy Analysis
4. iHATS
5. Independent Assessment of MathCraft
6. LM True Game (Mars Game)
7. MathCraft Scaling and Assessment Facilitation
8. Military Micro-Credentials (MIL-CRED)
9. Open Social Learner Modeling (OSLM) and Adaptive Navigation
10. Perceptual and Adaptive Learning Modules (PALMS)
11. Pervasive Learning System (PERLS)
12. Review and Assessment of Personnel Competencies and Job Description Models and Methods
13. Semantically Enabled Automated Assessment in Virtual Environments (SAVE)
14. Virtual World Sandbox (VWS) Community of Practice

Expected BAA topic areas:
1. Learning analytics and associated visualizations (3 requested white papers)
2. Writing narrative use-cases for emerging technologies' effects on learning (4 requested white papers)
3. Actionable Data Book prototype (2 requested white papers)
4. Learning experience design for the TLA (5 requested white papers)
5. Other (5 requested white papers)

12 2016 ADL Planned Accomplishments
We intend to accomplish a lot in 2016 and plan to have these ten concepts either accomplished or in place by July 2016:
- Author the roadmap-of-the-future LS&T whitepaper
- LS&T requirements documentation (STOs) and corresponding visualization
- Emerging Concepts Showcase and mobile demo kits
- Establish transition agreements for a significant number of BAA projects
- Introduce a new discipline: Industrial Knowledge Design (InK'D)
- Conduct site visits to share products and collect information (clearinghouse)
- Expand the Partnership Network (and show tangible results in it)
- TLA demo: complete the research design and functional architecture
- New DoDI officially signed
- Establish and monitor KPIs, and share them with partner gov't labs
We can't do this alone! Our partners are critical to our success, and the DADLAC is key. Partners include the DADLAC, the ADL Partnership Network, and other gov't R&D labs.

13 Sample Science and Technology Efforts
(more examples available upon request)

14 VR, AR, Simulation and Games
Virtual World Sandbox
Performer: ADL internal R&D team and Lockheed Martin
Why: Enable collaborative authoring and play of simulations via a web browser, with no plug-ins or software installs, and make the software open source.
Details: The VW Sandbox is an open-source, web-based game development tool. Authors can collaboratively (a la Google Docs) design 2D and 3D simulations, incorporate their own behavioral scripts, and make use of built-in sharable asset libraries. The software is cloud-based and requires no local software installation, not even browser plug-ins. Because the VW Sandbox is built on HTML5, anyone can play or author with any device (e.g., a smartphone) that can access a web browser, and the system supports single and multiplayer use. The VW Sandbox is free and open source, with no licensing fees, thus eliminating vendor lock-in; the code is available to use, reuse, or repurpose via GitHub. The VW Sandbox uses JavaScript as its native scripting language, and it ties in to ADL's 3D Repository of content (e.g., visual art assets, code, behaviors, sounds, and video clips); authors can also upload content from their own databases. Reuse is encouraged through the ability to build a custom library of commonly used content, eliminating time wasted searching for frequently used models and assets. The VW Sandbox also incorporates communication tools (e.g., voice, video, chat, and private messaging) that enable collaboration throughout design and development, and that can be used once the game is deployed for training. Training scenarios made with the Sandbox can be replayed for after-action review, and they integrate seamlessly with the Experience API (xAPI) for tracking performance.
ADL worked with Lockheed Martin to develop the original Virtual World Framework (i.e., the underlying browser-based communication protocol); from that framework, ADL developed the VW Sandbox. ADL continues to refine the system and is actively working to develop a self-sustaining community of practice around it.
VR, AR, Simulation and Games. Virtual World Sandbox (ADL, Lockheed Martin), ongoing: open-source simulations via web browser.

15 Mars Game
Performer: Lockheed Martin
Why: Demonstrate the flexibility and utility of the Virtual World Sandbox, and make a compelling, effective serious game in the process.
Details: The Mars Game, a web-based game built using the Virtual World Sandbox, teaches K-12 students computer science (programming) concepts and associated mathematics. In the game, the Mars Rover has crash-landed, and the student must help it repair itself, build shelter, and prepare for colonists to arrive. The game is designed to engage high school students and grow their critical thinking, math, and programming skills while introducing them to possible career opportunities in the Science, Technology, Engineering, and Mathematics (STEM) fields.
Status: Pilot studies assessing the game's impact on players' engagement and learning are in progress (as of Fall 2015).
Instructional Design; VR, AR, Simulation and Games. Mars Game (Lockheed Martin), FY14-FY16: web-based game for K-12 math.

16 Performance Tracking/Analysis
Experience API (xAPI)
Performer: ADL internal R&D team
Why: Capture, store, access, and share micro-behavioral human performance (big) data across any situation and system.
Details: The xAPI specification enables broad tracking of (big) data on learning experiences. It enables machine-readable communication (interoperability), storage, and retrieval of the data. xAPI is open source, built upon JSON (a fast, modern data-interchange format), and managed by ADL and an active multidisciplinary community. Example data that may be captured include how many attempts a student makes on a test question, whether a learner watched an entire assigned video, and how long a worker took to complete a performance task. Broader data, including physiological monitoring, Internet of Things (IoT)-based inputs, and other informal learning experiences, can also be captured, at both the learner and group levels. Learning designers can then use these data to inform decisions, create more personalized learning, or modify their assessments, which can increase learner performance. The xAPI specification can augment almost any performance assessment situation, whether in formal education and training contexts, informal contexts, or on-the-job support. It is being incorporated into many learning management systems, and companies such as Adobe, Articulate, and Hong Ding Educational Technology have begun using it.
Performance Tracking/Analysis. xAPI (ADL internal research), ongoing: open-source specification for big human-performance data.
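To make the description concrete, the sketch below builds and serializes a minimal xAPI statement in Python. The learner, activity IDs, and scores are hypothetical examples, but the actor/verb/object structure and the versioning header follow the public xAPI specification.

```python
import json

# A minimal xAPI statement: "who did what, to what, with what result".
# All identifiers below are hypothetical examples.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/land-nav-quiz-3",
        "definition": {"name": {"en-US": "Land Navigation Quiz 3"}},
    },
    "result": {
        "success": True,
        "score": {"scaled": 0.85},  # 85% on the quiz
        "duration": "PT4M30S",      # ISO 8601 duration: 4 min 30 s
    },
}

payload = json.dumps(statement)

# A Learning Record Store (LRS) would receive this via an HTTP POST to its
# /statements endpoint with the header "X-Experience-API-Version: 1.0.3"
# (the endpoint URL and credentials depend on the LRS deployment).
print(len(payload) > 0)
```

Because statements are plain JSON, any system that can emit HTTP requests (a game, a mobile app, an IoT sensor) can report learner activity in the same interoperable format.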

17 VR, AR, Simulation and Games
MathCraft
Performer: Cycorp
Why: Study the feasibility and effectiveness of "learning by teaching" in an AI-based serious game.
Details: MathCraft is a 3D science-fiction adventure video game designed to reinforce 6th-grade math and science concepts. Students are tasked to overcome an in-game challenge by helping the main character, an avatar named Elle, model math problems. The game is built on the artificial intelligence (AI) system Cyc, which watches and learns from the student's actions and updates Elle's mental models accordingly. The AI system enables (human) students to "teach" Elle the math, thereby helping themselves learn the concepts in the process. A dashboard monitors class and individual progress and performance, which teachers can use to identify and target parts of the curriculum that need emphasis.
Status: In 2014, schools in Texas piloted MathCraft, and in September 2015 large-scale field testing began with the Department of Defense Education Activity (DoDEA).
Instructional Design; VR, AR, Simulation and Games. MathCraft (Cycorp), FY12-FY16: learn-by-teaching AI-based serious game for 6th-grade math.

18 PERvasive Learning System (PERLS)
Performer: SRI
Why: Develop the science and technology to enable adaptive, mobile-based informal learning.
Details: PERLS is a mobile application, currently in development, capable of monitoring users' GPS location, interests, expertise, schedule, media preferences, and daily routines. It uses this information to suggest learning content and to persuade the learner to consume that content based on personal preferences and goals, time and location constraints, and level of expertise. PERLS includes an extensible platform for anytime/anywhere delivery of personalized learning content, and personal-assistance capabilities that help turn users into pervasive learners. PERLS targets self-directed informal learning, i.e., learning that falls outside formal course-centered instruction; augmenting formal learning with these informal experiences is believed to increase retention and the likelihood of progression to mastery. In addition to developing prototype technology, the PERLS project is extending the theory of self-directed learning, as well as optimal instructional design strategies for supporting it.
The PERLS technology platform includes a mobile phone application for learners to find and select lesson content, learn, and receive recommendations and other guidance from their personal assistant. A web-based UI supports contribution, curation, and user management. The back-end server manages sharable data about content, users, and learning contexts, and makes personal-assistance services available to other components. A key goal for the platform is extensibility: to new forms of instruction via modular Instruction Method Engines (IMEs), and to new context-awareness capabilities via Context Factor Modules (CFMs).
Personal-assistant services in PERLS include persuasive recommendation of context-appropriate content, cueing of reflection and planning actions, synchronizing learning with individual schedules and preferences, and guiding the learner along learning "trajectories" that may span long periods and evolving interests. These services are used in two ways: to support a unified across-lesson experience of assistance through the PERLS mobile application, and to support diverse forms of within-lesson assistance mediated by different IMEs.
Mobile Learning; Adaptive Learning; Instructional Design. PERLS (SRI), FY13-FY17: context-aware adaptive mobile learning.
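The flavor of context-aware recommendation described above can be sketched with a minimal heuristic: score each lesson by how well its topic tags and duration fit the learner's current context. This is an illustrative toy, not PERLS's actual algorithm; all lesson data and context fields are hypothetical.

```python
def recommend(lessons, context, top_n=2):
    """Rank lessons by fit to the learner's current context.

    A lesson scores one point per topic tag shared with the learner's
    current interests, plus a bonus if it fits the free time window.
    (Illustrative heuristic only; PERLS's real logic is richer.)
    """
    def score(lesson):
        tag_overlap = len(set(lesson["tags"]) & set(context["interests"]))
        fits_schedule = 1 if lesson["minutes"] <= context["free_minutes"] else 0
        return tag_overlap + fits_schedule

    return sorted(lessons, key=score, reverse=True)[:top_n]

# Hypothetical catalog and learner context:
lessons = [
    {"title": "Map Reading Basics",  "tags": ["land-nav", "maps"], "minutes": 10},
    {"title": "Radio Protocols",     "tags": ["comms"],            "minutes": 45},
    {"title": "Terrain Association", "tags": ["land-nav"],         "minutes": 5},
]
context = {"interests": ["land-nav"], "free_minutes": 15}

for lesson in recommend(lessons, context):
    print(lesson["title"])
```

In a real system the "context factors" (location, schedule, expertise) would arrive from modules like the CFMs mentioned above rather than a hand-built dict.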

19 Perceptual and Adaptive Learning Modules (PALMS)
Performer: Insight Learning Technologies
Why: Create authoring tools and APIs to give the community access to Insight Learning's successful adaptive training technology.
Details: PALMS is a drill-and-practice-style learning system that uses perceptual learning methods: repeatedly exposing a learner to specific visual information, which supports implicit learning and recognition-primed decision making. In addition, PALMS uses highly adaptive sequencing: a student's level of mastery is determined from accuracy and response time, and once a learner masters an item or subject area, it is removed from the flash-card rotation. Insight Learning has demonstrated significant improvement with this system, but modules were previously developed by specialized instructional designers and computer scientists; this project extends the work by developing a rapid authoring tool that facilitates the creation of learning content. Currently (as of Fall 2015), integration efforts with the PERvasive Learning System (PERLS) mobile application are underway. The team is also interested in collaborating with organizations that have suitable content or that are open to field-testing the application and system. Although the underlying algorithms remain proprietary, the community can access them via APIs, and the PALMS authoring system is free for government use.
Adaptive Learning; Instructional Design. PALMS (Insight Learning), FY13-FY16: drill-and-practice adaptive training.
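PALMS's actual sequencing algorithms are proprietary, but the loop described above (mastery judged from accuracy and response time, with mastered items retired from the rotation) can be sketched generically. The thresholds and mastery window below are illustrative assumptions, not PALMS parameters.

```python
from dataclasses import dataclass, field

@dataclass
class DrillItem:
    prompt: str
    # (was_correct, response_time_seconds) for each past attempt
    attempts: list = field(default_factory=list)

    def mastered(self, window=3, max_rt=3.0):
        """Mastered = the last `window` attempts were all correct and fast."""
        recent = self.attempts[-window:]
        return len(recent) == window and all(
            ok and rt <= max_rt for ok, rt in recent
        )

def next_item(items):
    """Pick the unmastered item with the fewest attempts;
    return None when everything is mastered."""
    active = [i for i in items if not i.mastered()]
    if not active:
        return None
    return min(active, key=lambda i: len(i.attempts))

# Simulate a learner who only answers the first item quickly and correctly.
deck = [DrillItem("6 x 7 = ?"), DrillItem("8 x 9 = ?")]
for _ in range(10):
    item = next_item(deck)
    if item is None:
        break
    correct = item.prompt.startswith("6")  # learner only knows 6 x 7
    item.attempts.append((correct, 1.5))

print([i.prompt for i in deck if i.mastered()])  # → ['6 x 7 = ?']
```

The mastered item drops out of rotation, so later passes concentrate practice on the item the learner keeps missing, which is the core behavior of adaptive drill-and-practice.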

20 Open Social Learner Modeling (OSLM)
Performer: University of Pittsburgh (Peter Brusilovsky)
Why: Explore the feasibility and utility of dynamic, open (i.e., personally viewable) individual and class/cohort models of learner performance, as used with "smart content."
Details: OSLM combines two technologies: open social learner modeling and adaptive navigation support. Together they can improve both learning performance and motivation, guiding learners to the most relevant content items and activities and motivating them through visualization and social comparison. This effort is expanding the core idea from one domain and type of content to many, and from an early prototype to a generic, reusable set of tools that can be integrated into various component-based architectures. The project includes an extended set of algorithms for open social student modeling and navigation support. In addition to studying the impact of these approaches, the OSLM project is delivering the open learner model interfaces as open-source software and authoring tools, plus empirical evaluation results. Currently (as of Fall 2015), students at the University of Pittsburgh are using the system in their programming courses; initial results show a positive impact on learning outcomes.
Open Learner Models; Instructional Design. OSLM (University of Pittsburgh), FY13-FY15: motivating students through social comparison.

21 MIL-CRED: A Micro-credential Design to Facilitate Transition of Military Personnel to Civilian Careers and Educational Opportunities
Performer: Integrated Media Technologies (IMT) Global (George Tamas)
Why: Design a micro-credential schema and API that helps translate military skills into the civilian domain.
Details: The intent of this project is to design, develop, and test a standardized micro-credential format that facilitates the transition of military personnel to civilian careers and educational opportunities. A variety of initiatives have emerged to explore the value of emerging micro-credentialing standards, which offer simplified collection, storage, and distribution of evidence of learning for short-duration training, unique military skills, experience, and other competencies that do not fit neatly into formal academic models (course units, transcripts) or conventional resume formats. One shortfall of current micro-credentialing efforts is that, for corporate purposes, these early models are often insufficiently robust and lack standards for valid classification and placement. MIL-CRED shall develop a sufficiently robust design to serve as a new international micro-credential standard, one that is "transferable" and generally accepted across military, corporate, and academic organizations, with particular emphasis on the unique needs of the military.
Deliverables: This effort addresses two aspects of transition potential: suitable technical interface mechanisms (such as APIs) to existing military, academic, and corporate systems and databases; and recommendations for policy and procedure guidelines in both the sending and receiving organizations that deal with credentialing issues (issuance, search, assessment, placement, etc.).
Competencies + Credentialing. MIL-CRED (IMT), FY15-FY16: micro-credential model facilitating the transition of military personnel.

22

23 Dr. Sae Schatz, Director, ADL Initiative

