{"id":33517,"date":"2024-06-14T09:37:21","date_gmt":"2024-06-14T13:37:21","guid":{"rendered":"https:\/\/research.ncsu.edu\/?page_id=33517"},"modified":"2024-09-27T09:02:27","modified_gmt":"2024-09-27T13:02:27","slug":"speakers","status":"publish","type":"page","link":"https:\/\/research.ncsu.edu\/initiatives\/ornlcuaiws24\/speakers\/","title":{"rendered":"Oak Ridge National Laboratory’s Core Universities AI Workshop"},"content":{"rendered":"\n\n\n\n

Speakers

Keynote Speakers

Mansoor Haider, Professor, Department of Mathematics, NC State University

Keynote: Unsupervised Learning Methods for Dual-Domain Geo-Clustering in Public Health Applications

Bio

I am a Professor in the NCSU Department of Mathematics and the Biomathematics graduate program. I also serve as Director of the Foundations in Data Science MS Program.

My research expertise is in applied and computational mathematics, with a focus on applications in the life sciences and public health. A longstanding theme in my research is mathematical modeling of biological soft tissues in the contexts of tissue biomechanics, mechanobiology, regenerative medicine and biomedical imaging. A more recent research theme is the development of tailored unsupervised machine learning algorithms for public health applications. Technical areas of expertise include continuum mechanics (nonlinear elasticity, viscoelasticity, multiphase mixtures), applied partial differential equations, numerical methods for PDEs (BEM, FEM, finite difference methods), methods for data-driven mathematical modeling, and algorithms for data clustering. My educational interests include best practices for inclusive graduate training in the mathematical sciences, distance learning, and effective use of technology in the classroom.

Website

Forrest M. Hoffman, Distinguished Computational Earth System Scientist, Oak Ridge National Laboratory

Keynote: Exploiting Artificial Intelligence and Machine Learning for Advancing Earth System Prediction

Abstract: Because of rapid technological advances in sensor development, computational capacity, and data storage density, the volume, velocity, complexity, and resolution of Earth science data are rapidly increasing. Data mining, machine learning (ML), and other statistical regression approaches, often referred to collectively as artificial intelligence (AI), offer the promise for improved prediction and mechanistic understanding of Earth system processes, and they provide a path for fusing data from multiple sources or platforms into data-driven and hybrid models composed of both process-based and deep learning components. Prospective opportunities for employing an AI framework to integrate a wealth of in situ measurements and remotely sensed observations will be presented. As an example, ML-based models of plant stomatal conductance and plant hydraulics can be developed to produce a hybrid process-based/ML-based land model within a global Earth system model with the aim of reducing uncertainties in predictions of soil moisture, plant productivity, and carbon assimilation. Carefully designed hybrid models could improve accuracy of predictions and better inform choices of climate mitigation and adaptation strategies. A variety of environmental characterization, uncertainty quantification, and model prediction approaches will be described, and strategies for developing a new generation of ML methods on high performance computing platforms to advance Earth and environmental system science will be presented.
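
To make the hybrid process-based/ML idea concrete, here is a minimal sketch (my illustration under stated assumptions, not the actual land-model code): a placeholder Ball-Berry-style stomatal-conductance formula is emulated by a simple regression fitted to synthetic "observations"; in a real hybrid model the emulator would be a neural network trained on field data and embedded in the land model's process loop. All variable names and constants below are assumptions.

```python
# Schematic hybrid process/ML step (illustrative only; constants are placeholders).
import numpy as np

rng = np.random.default_rng(0)

def gs_empirical(A, rh, cs, g0=0.01, g1=9.0):
    # Process-based parameterization of stomatal conductance.
    return g0 + g1 * A * rh / cs

# Synthetic "observations" that the ML component learns from.
A, rh, cs = rng.uniform(2, 20, 500), rng.uniform(0.3, 0.9, 500), rng.uniform(300, 450, 500)
gs_obs = gs_empirical(A, rh, cs) + 0.005 * rng.normal(size=500)

# Fit a simple linear emulator (a stand-in for a neural network) on the drivers.
X = np.column_stack([A, rh, cs, np.ones_like(A)])
coef, *_ = np.linalg.lstsq(X, gs_obs, rcond=None)
gs_ml = lambda A, rh, cs: np.column_stack([A, rh, cs, np.ones_like(A)]) @ coef

# Hybrid step: the rest of the process model consumes whichever submodel is chosen.
print(gs_ml(A[:3], rh[:3], cs[:3]), gs_empirical(A[:3], rh[:3], cs[:3]))
```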

Bio

Forrest M. Hoffman is a Distinguished Computational Earth System Scientist and the Group Leader for the Integrated Computational Earth Sciences Group at Oak Ridge National Laboratory (ORNL). He develops and applies Earth system models (ESMs) to investigate the global carbon cycle and feedbacks between biogeochemical cycles and the climate system. Forrest leads a project focused on community model benchmarking activities and the development of the International Land Model Benchmarking (ILAMB) and International Ocean Model Benchmarking (IOMB) packages. He is particularly interested in applying machine learning methods to explore the interactions of terrestrial and marine ecosystems with hydrology and climate. Forrest also leads development and deployment of a next generation Earth System Grid Federation (ESGF) distributed data infrastructure in the US. Forrest is also a Joint Faculty Member in the University of Tennessee's Department of Civil & Environmental Engineering in nearby Knoxville, Tennessee, a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE), and a Fellow of the American Association for the Advancement of Science (AAAS).

Website

Jian Pei, Arthur S. Pearse Distinguished Professor of Computer Science, Department of Electrical & Computer Engineering, Duke University

Keynote: Contribution Valuation for Collaborative AI

Bio

Jian Pei is the Arthur S. Pearse Distinguished Professor of Computer Science and Chair of the Department of Computer Science at Duke University. His research areas include data science, data mining, databases, information retrieval, computational statistics, applied machine learning and AI.

Website

Speakers

Prasanna Date, Computer Science and Mathematics Division, Oak Ridge National Laboratory

Dr. Prasanna Date is a Research Scientist in the Computer Science and Mathematics Division (CSMD) at the Oak Ridge National Laboratory (ORNL). His research interests include neuromorphic computing and quantum machine learning. Date was featured on the 2022 Forbes 30 Under 30 Asia list in the Healthcare and Science category. His team also won the 2023 R&D 100 Award in the Software/Services Category for developing the SuperNeuro simulator, which at the time of release, was the fastest simulator for neuromorphic computing in the world. At ORNL, Date was awarded the Promising Early-Career Researcher Award in CSMD in 2021 and also won the 2021 Your Science in a Nutshell (YSiaN) competition.

Website

Yulia Gel, Department of Statistics, Virginia Tech

Talk: Topological Graph Contrastive Learning

Abstract: Graph contrastive learning (GCL) has recently emerged as a new concept which allows for capitalizing on the strengths of graph neural networks (GNNs) to learn rich representations in a wide variety of applications which involve abundant unlabeled information. However, existing GCL approaches largely tend to overlook the important latent information on higher-order graph substructures. We address this limitation by bringing the concepts of topological invariance and extended persistence on graphs to GCL. In particular, we propose a new contrastive mode which targets topological representations of the two augmented views from the same graph, yielded by extracting latent shape properties of the graph at multiple resolutions and summarized in a form of extended persistence landscapes (EPL). Our extensive numerical results on molecular and chemical compound datasets show that the new Topological Graph Contrastive Learning approach delivers significant performance gains in unsupervised graph classification and also exhibits robustness under noisy scenarios. This is a joint work with Yuzhou Chen, University of California, Riverside, and Jose Frias, UNAM.
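
As a rough companion to the contrastive mode described above, here is a minimal numpy sketch (not the authors' code) of an NT-Xent-style contrastive loss between two views of the same graphs; in the topological setting one view could be an extended-persistence-landscape summary, but here both views are just plain embedding vectors.

```python
# Minimal NT-Xent contrastive loss over two views of the same set of graphs.
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (n_graphs, dim) embeddings of two augmented views of the same graphs."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine-similarity space
    sim = z @ z.T / tau
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positives: (i, i+n)
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return -(sim[np.arange(2 * n), pos] - logsumexp).mean()

rng = np.random.default_rng(0)
view1 = rng.normal(size=(8, 16))                 # e.g., GNN graph embeddings
view2 = view1 + 0.1 * rng.normal(size=(8, 16))   # e.g., a topological (EPL) summary view
print(nt_xent(view1, view2))
```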

Bio: Yulia R. Gel is a Professor in the Department of Statistics at Virginia Tech and Program Director-Expert at the Division of Mathematical Sciences at the National Science Foundation. Her research interests include statistical foundations of Data Science, inference for random graphs and complex networks, time series analysis, and predictive analytics. She holds a Ph.D. in Mathematics and completed a postdoctoral position in Statistics at the University of Washington. Prior to joining Virginia Tech, she was a tenured faculty member at the University of Waterloo, Canada, and the University of Texas at Dallas. She also held visiting positions at Johns Hopkins University, the University of California, Berkeley, and the Isaac Newton Institute for Mathematical Sciences, Cambridge University, UK. She is a Fellow of the American Statistical Association.

Website

Xiuwen Liu, Department of Computer Science, Florida State University

Talk: Structures and Vulnerabilities of the Representation Space of Transformers

Abstract: Pretrained large foundation models play a central role in the recent surge of artificial intelligence, resulting in finetuned models with remarkable abilities when measured on benchmark datasets, standard exams, and applications. Due to their inherent complexity, these models are poorly understood. While small adversarial inputs to such models are well known, the structures of the representation space are not well characterized despite their fundamental importance. In this talk, I will discuss the representation space of transformers and show their inherent vulnerabilities and limitations. Based on local directional Lipschitz constant estimation techniques, we propose an effective framework to characterize and explore the embedding spaces of deployed large models. More specifically, using the vision transformers as an example due to the continuous nature of their input space, we show via analyses and systematic experiments that the representation space consists of approximately piecewise linear subspaces where there exist very different inputs sharing the same representations, and at the same time, local normal spaces where there are visually indistinguishable inputs having very different representations. The empirical results are further verified using the local directional estimations of the Lipschitz constants of the underlying models. The work is done jointly with Shaeke Salman and Md Montasir Bin Shams.
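
A hedged sketch of the kind of probe the abstract refers to (my simplification, not the authors' implementation): estimate a local directional Lipschitz constant of a representation map f around an input x by finite differences along random unit directions.

```python
# Finite-difference probe for a local directional Lipschitz constant (lower bound).
import numpy as np

def local_directional_lipschitz(f, x, eps=1e-3, n_dirs=256, rng=None):
    rng = rng or np.random.default_rng(0)
    fx = f(x)
    best = 0.0
    for _ in range(n_dirs):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)                              # random unit direction
        best = max(best, np.linalg.norm(f(x + eps * d) - fx) / eps)
    return best

# Toy "representation map" standing in for a transformer encoder.
W = np.random.default_rng(1).normal(size=(32, 64))
f = lambda x: np.tanh(W @ x)
print(local_directional_lipschitz(f, np.zeros(64)))
```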

Bio: Xiuwen Liu is the L3Harris Professor in the Department of Computer Science at Florida State University (FSU), where he served as the chair from 2020 to 2023. He develops effective optimization, machine learning, and analysis techniques for problems and models that are high dimensional in nature. His current areas of research include machine learning for solving engineering problems, understanding machine learning and AI models and their vulnerabilities, physics-guided machine learning for quantum solutions, AI and cyber security education, and neuro-symbolic solutions via iterative integration of large language models and reasoning tools. He serves on the program committee of multiple international conferences and on the CAE (National Centers of Academic Excellence in Cybersecurity) CoP-CD Steering Committee.

Website

Robert Patton, Data and AI Systems Section, Oak Ridge National Laboratory

Talk: The Future of Artificial Intelligence

Abstract: The human brain stands as a model of exceptional capability and energy efficiency, setting the ultimate standard for artificial intelligence (AI). In the quest to emulate these attributes, AI development is likely to proceed along two primary trajectories. The first path involves the continuous expansion of neural network sizes, enabling AI systems to achieve greater cognitive capabilities and handle increasingly complex tasks. The second path emphasizes improving the energy efficiency of these networks, striving to mirror the brain's ability to perform immense computations with minimal power consumption. As advancements are made in both areas, these paths are expected to converge, resulting in AI systems that not only match the brain's sophisticated functionalities but also operate with comparable energy efficiency. This convergence will mark a significant milestone in AI, bringing us closer to creating machines that can think, learn, and adapt with the efficiency and effectiveness of the human brain, revolutionizing numerous fields and applications.

Bio: Robert M. Patton is a Distinguished Research Staff Member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory and the Group Lead for the Learning Systems Group within the Data and AI Systems Section. He has over 17 years of professional experience in government research and development and has served as principal investigator (PI) on more than $2 million in research funds. His research focuses on artificial intelligence, evolutionary algorithms, and machine learning as they apply to data analysis, information processing, and prediction. His work has produced more than 100 publications, 4 patents, 3 software copyrights, 3 R&D 100 Awards, and 3 nominations for the Association for Computing Machinery Gordon Bell Prize.

Website

Catherine (Katie) Schuman, Department of Electrical Engineering and Computer Science, University of Tennessee

Talk: Neuromorphic Computing for Real-World Applications

Abstract: Neuromorphic computing offers the opportunity for low-power, intelligent systems. However, effectively leveraging neuromorphic computers requires co-design of hardware, algorithms, and applications. In this talk, I will review our recent work on hardware-application co-design in neuromorphic computing. In particular, I will showcase our uses of neuromorphic computing for a variety of applications, including internal combustion engine control, radiation detection, and event-based camera processing.

Bio: Catherine (Katie) Schuman is an Assistant Professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee (UT). She received her Ph.D. in Computer Science from UT in 2015, where she completed her dissertation on the use of evolutionary algorithms to train spiking neural networks for neuromorphic systems. Katie previously served as a research scientist at Oak Ridge National Laboratory, where her research focused on algorithms and applications of neuromorphic systems. Katie co-leads the TENNLab Neuromorphic Computing Research Group at UT. She has over 70 publications as well as seven patents in the field of neuromorphic computing. She received the Department of Energy Early Career Award in 2019. Katie is a senior member of the Association for Computing Machinery and the IEEE.

Website

Nikos Sidiropoulos, Department of Electrical and Computer Engineering, University of Virginia

Talk: Tensors in Combinatorial Optimization and Error Control Decoding

Abstract: We consider the problem of finding the smallest or largest entry of a tensor of order N that is specified via its rank decomposition. We show that this is NP-hard for any tensor rank higher than one, and polynomial-time solvable in the rank-one case. We also propose a continuous relaxation and prove that it is tight for any rank. For low-enough ranks, the proposed continuous reformulation is amenable to low-complexity gradient-based optimization, and we propose a suite of gradient-based optimization algorithms drawing from projected gradient descent, Frank-Wolfe, or explicit parametrization of the relaxed constraints. We also show that our core results remain valid no matter what kind of polyadic tensor model is used to represent the tensor of interest, including Tucker, HOSVD/MLSVD, tensor train, or tensor ring. Next, we consider the class of problems that can be posed as special instances of the problem of interest. We show that this class includes the partition problem (and thus all NP-complete problems via polynomial-time transformation), integer least squares, integer linear programming, integer quadratic programming, sign retrieval (a special kind of mixed integer programming / restricted version of phase retrieval), and maximum likelihood decoding of parity check codes. We demonstrate promising experimental results on a number of hard problems, including state-of-the-art performance in decoding low-density parity-check codes and general parity check codes.
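
For orientation, the sketch below (my illustration, with all sizes and names assumed) shows the problem at toy scale: given only the CP factor matrices of a small order-3 tensor, find its largest entry by brute force. For larger ranks and orders, brute force is exactly the NP-hard problem the talk addresses with a continuous relaxation and gradient-based methods.

```python
# Largest entry of a small rank-R CP tensor, found by exhaustive enumeration.
import itertools
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 3
A, B, C = rng.normal(size=(I, R)), rng.normal(size=(J, R)), rng.normal(size=(K, R))

def cp_entry(i, j, k):
    # T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
    return np.sum(A[i] * B[j] * C[k])

best = max(itertools.product(range(I), range(J), range(K)), key=lambda t: cp_entry(*t))
print(best, cp_entry(*best))

# Sanity check against the dense tensor (only feasible at toy scale).
T = np.einsum('ir,jr,kr->ijk', A, B, C)
assert np.isclose(T.max(), cp_entry(*best))
```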

Bio: Nicholas D. Sidiropoulos (Fellow, IEEE) received the Diploma in electrical engineering from Aristotle University of Thessaloniki, Thessaloniki, Greece, and the M.S. and Ph.D. degrees in electrical engineering from the University of Maryland at College Park, College Park, MD, USA, in 1988, 1990, and 1992, respectively. He is the Louis T. Rader Professor with the Department of ECE, University of Virginia. He previously served on the faculty of the University of Minnesota and the Technical University of Crete, Greece. His research interests are in signal processing, communications, optimization, tensor decomposition, and machine learning. He received the NSF CAREER award in 1998, the IEEE Signal Processing Society (SPS) Best Paper Award in 2001, 2007, 2011, and 2022, and the IEEE SPS Donald G. Fink Overview Paper Award in 2022. He served as IEEE SPS Distinguished Lecturer (2008-2009), Vice President-Membership of the IEEE SPS (2017-2019), and as Chair of the IEEE SPS Fellow Evaluation Committee (2020-2021). He received the 2010 IEEE SPS Meritorious Service Award, the 2013 Distinguished Alumni Award of the ECE Department, University of Maryland, the 2022 EURASIP Technical Achievement Award, and the 2022 IEEE SPS Claude Shannon-Harry Nyquist Technical Achievement Award. He is a fellow of EURASIP (2014).

Website

Wendy K. Tam, Political Science, Computer Science, Law, and Biomedical Informatics, Vanderbilt University

Talk: Technology as Both a Threat and a Promise for Regulating Social Media Platforms

Abstract: The Internet was "born" in the early 1990s. Since then, it has morphed in simultaneously amazing and concerning ways. Moreover, while online content was largely human-generated a few decades ago, the content consumed today is generated by both humans and machines, further driving the speed, increasing the amount, and changing the character of the information that is produced and consumed. Along with this explosion of content has arisen a need and desire for content moderation on social media platforms. We propose a legal theory consistent with constitutional principles for regulating social media platforms and present a large-scale computational model that both inspires our legal approach and provides the technical evidence for the scalability and viability of our approach.

Bio: Wendy K. Tam is Professor of Political Science, Computer Science, Law, and Biomedical Informatics at Vanderbilt University, an affiliate at the National Center for Supercomputing Applications and Professor Emerita at the University of Illinois at Urbana-Champaign, and Professional Researcher in the School of Medicine at the University of California at San Francisco. Her general research interests are in the development of computational and statistical models across varied applications.

Website

Richard Vuduc, School of Computational Science and Engineering, Georgia Tech

Talk: Are AI machines good for HPC?

Abstract: Supercomputer architectures will be dominated by a single workload: AI training. Is that good for HPC? This talk speculates on the relative merits (and pitfalls) of “AI machines” for HPC workloads.

Bio: Rich Vuduc is a professor at Georgia Tech in the School of Computational Science and Engineering. His research lab, the HPC Garage, is interested in performance “by any means necessary,” whether by more innovative algorithms, better analysis, more effective programming techniques, or novel hardware.

Website

Emily Wenger, Department of Electrical and Computer Engineering, Duke University

Talk: AI for Cryptanalysis

I research security and privacy issues related to machine learning models. In 2023-2024, I am working as a research scientist at Meta AI, developing machine-learning based attacks on post-quantum cryptosystems. Starting in July 2024, I will be an assistant professor of Electrical and Computer Engineering at Duke University.

In 2023, I graduated with my PhD from the University of Chicago, where I worked in the SAND Lab and was advised by Ben Zhao and Heather Zheng. During my PhD, I received the GFSD, Harvey, and University of Chicago Neubauer and Harper Dissertation fellowships, as well as a Siebel Scholarship. I was also named to the 2024 Forbes 30 under 30 list for my work on Glaze, a tool that protects artists’ work from unwanted use in generative AI models.

Website

Chau-Wai Wong, Department of Electrical and Computer Engineering, NC State University

Talk: From Promise to Vulnerability: Navigating Privacy, Attacks, and Defenses in Federated Learning

Chau-Wai Wong received his B.Eng. degree with first-class honors in 2008, and an M.Phil. degree in 2010, both in electronic and information engineering from The Hong Kong Polytechnic University, and his Ph.D. degree in electrical engineering from the University of Maryland, College Park, in 2017. He is currently an Assistant Professor at the Department of Electrical and Computer Engineering, Forensic Sciences Cluster, and Secure Computing Institute, North Carolina State University.

Website

Dawei Zhou, Department of Computer Science, Virginia Tech

Talk: Large Foundation Model Development and Adaptation for Metamaterial Design

Abstract: Metamaterials, characterized by unique properties stemming from their designed structures rather than chemical compositions, have offered possibilities not attainable with traditional materials and emerged as a frontier in disruptive technologies across various domains, such as sensing, information technology, infrastructure, and transportation. However, the design of metamaterials heavily relies on human-centric concepts, expertise, and inspiration. This knowledge-intensive process poses a significant barrier for engineers and technicians designing metamaterials tailored to their requirements. In this talk, I will introduce METASCIENTIST, an autonomous computational system aiming to synthesize the knowledge related to metamaterial design and efficiently generate novel hypotheses at scale. In particular, I will hinge on the key metamaterial applications and discuss our recent work on long-tailed hypothesis generation and non-IID individual calibration. Finally, I will conclude this talk and share thoughts about my future research.

Bio: Dawei Zhou is an Assistant Professor in the Computer Science Department of Virginia Tech and the director of the Virginia Tech Learning on Graphs (VLOG) Lab. Zhou's prior research focuses on open-world machine learning, with applications in hypothesis generation and validation, financial fraud detection, cyber security, risk management, predictive maintenance, and healthcare. He obtained his Ph.D. degree from the Computer Science Department of the University of Illinois Urbana-Champaign (UIUC). He has authored more than 40 publications in premier academic venues across AI, data mining, and information retrieval (e.g., AAAI, IJCAI, KDD, ICDM, SDM, TKDD, DMKD, WWW, CIKM) and has served as Vice Program Chair, Proceedings Chair, Local Chair, Session Chair, and (Senior) Program Committee Member at various top ML and AI conferences (e.g., NeurIPS, ICML, KDD, WWW, SIGIR, ICLR, AAAI, IJCAI, BigData). His research is generously supported by Virginia Tech, NSF, DARPA, DHS, the Commonwealth Cyber Initiative, 4VA, Deloitte, and Cisco. His work has been recognized by the 24th CNSF Capitol Hill Science Exhibition, a Cisco Faculty Research Award (2023), the AAAI New Faculty Highlights roster (2024), and an NSF CAREER Award (2024).

Website

Panelists

Panel 1

Yiran Chen (Moderator), Department of Electrical and Computer Engineering, Duke University

Yiran Chen is the John Cocke Distinguished Professor of Electrical and Computer Engineering at Duke University. He is the director of the National Science Foundation (NSF) AI Institute for Edge Computing Leveraging Next-generation Networks (Athena). His research focuses on new memory and storage systems, machine learning and neuromorphic computing systems, and mobile computing.

Website

Panel 2

Michael L. Parks (Moderator), Oak Ridge National Laboratory

I am the Director of the Computer Science and Mathematics Division in the Computing and Computational Sciences Directorate at Oak Ridge National Laboratory. My current research interests include numerical analysis, multiscale modeling, scientific machine learning, nonlocal models and mathematics, numerical linear algebra, and linear solvers.

Website

Srikanth Allu, Oak Ridge National Laboratory

Dr. Allu is a computational research scientist specializing in the development of advanced multi-physics algorithms for energy storage applications, with extensive experience as a principal investigator on various computational science projects. He currently leads the Rapid Operational Validation Initiative (ROVI), a nationwide AI initiative focused on validating and testing the efficiency of long-duration energy storage systems.

Website

Erin Barker, Pacific Northwest National Laboratory

Erin Iesulauro Barker is a senior research scientist in the National Security Directorate (NSD) at Pacific Northwest National Laboratory (PNNL). She has 15 years of experience in computational modeling of the mechanical behavior of materials at multiple length scales using finite element analysis, developing computational tools for automatically generating digital material samples, and developing highly parallel solver frameworks.

Website

Rachel Levy, Data Science Academy, NC State University

Rachel Levy is the inaugural leader of the Data Science Academy. She incubates data science research partnerships within NC State, across NC and beyond, leads the design and implementation of the DSA's ADAPT course model (All-Campus Data Science Accessible Project Based Teaching and Learning), and communicates about data science with national and international audiences.

Bio

David M. Reif, Division of Translational Toxicology, National Institute of Environmental Health Sciences

David M. Reif, Ph.D., is Chief of the Predictive Toxicology Branch (PTB) in the Division of Translational Toxicology (DTT). In this role, he leverages the expertise of the branch in AI/ML, data science, toxicogenomics, spatiotemporal exposures and toxicology, computational methods development, and new approach methods to advance predictive toxicology applications with partners across NIEHS, the interagency Tox21 Program and the Interagency Coordinating Committee on the Validation of Alternative Methods. Reif was previously a bioinformatics professor at NC State.

Website

Posters

Abdullah Al Arafat, NC State University – Secure Learning in Resource Scarce Applications

Rick Archibald, Oak Ridge National Laboratory – AI/ML in FASTMath

The Frameworks, Algorithms and Scalable Technologies for Mathematics (FASTMath, scidac5-fastmath.lbl.gov) Institute is the mathematical institute for the Scientific Discovery through Advanced Computing (SciDAC, www.scidac.gov) project. The FASTMath Institute develops and deploys scalable mathematical algorithms and software tools for reliable simulation of complex physical phenomena and collaborates with application scientists to ensure the usefulness and applicability of FASTMath technologies. This poster will present the FASTMath integrated AI/ML activities across a diverse set of mathematical and computational research areas and demonstrate the advances made in scientific AI/ML for SciDAC applications and the DOE mission.

Wes Avett – Databricks University Alliance: Data & AI Platform for Teaching & Research Innovation

The Databricks University Alliance program has partnered with over 1,000 academic institutions worldwide to integrate industry-leading data and AI technologies into research and teaching. We are actively seeking new faculty and institutions looking to enhance their curricula and research capabilities with our cutting-edge platform. The program offers several key benefits. It prepares students with in-demand skills: students gain hands-on experience with the same tools and technologies used by leading companies, making them job-ready upon graduation. It supports cutting-edge research: researchers can focus on generating insights rather than managing complex infrastructure, with an easy-to-use platform for data management and analysis. It bridges academia and industry: we facilitate connections between universities and our enterprise customers, helping align academic programs with industry needs. It provides teaching resources at no cost: participating institutions receive access to Databricks software, training materials, and technical support at no charge. Our vision is to empower the next generation of data professionals and accelerate innovation in data science and AI. We invite faculty to join us in shaping the future of data education and research. Bring your data, unlock your insights, and prepare your students for success with Databricks.

Wesley Brewer, Oak Ridge National Laboratory – Entropy-driven optimal sifting of training data

Optimal sub-sampling of large datasets from fluid dynamics simulations is essential for training reduced-order machine-learned models. A method using Shannon entropy was developed to weight flow features according to their level of information content, such that the most informative features can be extracted and used for training a surrogate model. The method is demonstrated on the canonical flow over a cylinder problem simulated with OpenFOAM. Both time-independent predictions and temporal forecasting were investigated, as well as two types of prediction targets: local per-grid-point predictions and global per-time-step predictions. When tested on training a surrogate model, results indicate that our entropy-based sampling method typically outperforms random sampling and yields more reproducible results in fewer iterations. Finally, the method was used to train a surrogate model for modeling turbulence in magnetohydrodynamic flows, which revealed various challenges and opportunities for future research.
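
One plausible reading of the entropy-weighted idea, sketched below (my simplification, not the ORNL code; all sizes and names are assumptions): score each candidate sample by the Shannon entropy of its binned feature values and draw the training subset in proportion to that score.

```python
# Entropy-weighted sub-sampling of a feature matrix for surrogate-model training.
import numpy as np

def shannon_entropy(row, bins=16):
    hist, _ = np.histogram(row, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()           # entropy of the binned feature values

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 64))              # e.g., per-grid-point flow features
scores = np.apply_along_axis(shannon_entropy, 1, X)
probs = scores / scores.sum()
idx = rng.choice(len(X), size=1000, replace=False, p=probs)
X_train = X[idx]                              # high-information subset for the surrogate
print(X_train.shape, scores[idx].mean(), scores.mean())
```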

Aditya Devarakonda, Wake Forest University – Algorithms Design for Scalable Optimization

This work will present algorithm design techniques for numerical optimization at scale on modern supercomputing environments. The newly designed algorithms trade extra computation and bandwidth in order to reduce the frequency of communication. Preliminary performance results on developing multi-dimensional optimization algorithms that further reduce communication will also be presented.

Dayton Kizzire, Oak Ridge National Laboratory – XsymNet: Combined Machine Learning and Exhaustive Symmetry Approach for Phase Transition Studies

Revealing the symmetry change across a phase transition is fundamentally important to understanding and controlling properties such as polarization (ferroelectric transition), conductivity (metal-insulator transition) and other unconventional properties including piezoelectricity, multiferroics, and superconductivity. The recently developed exhaustive symmetry search (ESS) technique has been proven to be an effective tool for systematically studying subtle and complex phase transitions. Here we present XsymNet, a combined machine learning and exhaustive symmetry search approach that has been developed to identify the subgroup symmetry of a material from powder diffraction. XsymNet is a convolutional neural network trained on simulated diffraction data from strained and distorted subgroup tree members generated by ISODISTORT. In this work we discuss the workflow of XsymNet, how it lowers the barrier for phase transition studies, and our future work towards automated diffraction analysis powered by ML.

Alexandros Kapravelos, NC State University – Model Integrated Context-aware Static Analysis

Static Application Security Testing (SAST) tools are vital parts of modern software security, providing automatic feedback on potential vulnerabilities that can impact codebases. Unfortunately, for languages such as JavaScript and Python, which possess dynamic features (callbacks, anonymous functions, etc.), SAST tools lack code context when generating intermediate representations for these languages, leading to high false positives.

In this work, we present MiCaSa, a system that introduces context-awareness via large language models (LLMs) into static analysis tools. MiCaSa improves the reachability of intermediate representations such as call graphs by integrating LLMs into static code analysis. We modified the open-source SAST tool, Joern, and integrated LLM calls into areas where Joern failed to identify function callees appropriately. Our results show that it is possible to utilize LLMs to resolve the dynamic features in modern programming languages that present challenges to SAST tools. We believe that this approach can be extended to other SAST tools like CodeQL to provide enhanced vulnerability detection to a large number of codebases.

Olivera Kotevska, Oak Ridge National Laboratory – Federated Learning in Heterogeneous Environments for Science Applications

Federated Learning (FL) decentralizes the training process, enabling edge-computing devices to collaboratively update a shared model using locally stored data, allowing multiple data owners to train a machine learning model without sharing their individual datasets. This approach presents significant opportunities for privacy-preserving data collaboration among scientific entities. However, most research on FL has focused on simulated environments, with limited exploration of the benefits of supercomputers, communication efficiency, and the privacy-preservation aspects for extreme-scale, massively distributed scientific data on high-performance computing (HPC) platforms. This work presents dynamic sketching strategies to reduce the communication overhead, the design of an FL benchmark framework on HPC, and a scalable privacy-preservation approach.
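
For readers new to the setting, here is a bare-bones federated averaging (FedAvg) round in numpy, a sketch of the FL process described above rather than the ORNL framework; client sizes, learning rates, and the linear model are all assumptions.

```python
# One-file FedAvg toy: clients fit a shared linear model locally; the server
# averages the client updates weighted by local dataset size.
import numpy as np

def client_update(w, X, y, lr=0.1, epochs=5):
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)     # least-squares gradient on local data
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
w_true = rng.normal(size=8)
clients = []
for n in (200, 50, 120):                       # heterogeneous client dataset sizes
    X = rng.normal(size=(n, 8))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=n)))

w_global = np.zeros(8)
for _ in range(20):                            # communication rounds
    updates = [client_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    w_global = np.average(updates, axis=0, weights=sizes)
print(np.linalg.norm(w_global - w_true))       # global model approaches the truth
```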

Jiajia Li, NC State University – High-Performance Sparse Tensor Algebra for AI

Tensors represent high-dimensional, large-scale data and can be viewed as multi-way arrays, generalizing matrices to more than two dimensions. When used for multifactor data analysis, tensor methods help analysts uncover latent structures, which has led to numerous applications in fields such as healthcare analytics, social network analysis, computer vision, signal processing, and neuroscience, among others. This poster presents algorithmic techniques, data structures, and parallel implementations for building scalable tensor decompositions on various high-performance computing (HPC) platforms, including multicore CPUs, graphics co-processors (GPUs), and distributed systems.

Yixin Li, NC State University – Real-Time Cardiac Monitoring: Integrated R-Peak Detection and Anomaly Detection on Low-Power Systems

Real-time cardiovascular monitoring is essential for early detection of heart abnormalities, yet implementing such systems on low-power embedded devices poses significant challenges, including computational limitations and data imbalance in anomaly detection. In this work, we present a real-time ECG monitoring system that integrates R-peak detection and anomaly detection, overcoming these challenges through an adapted shifted window approach. By processing smaller segments of ECG signals rather than entire recordings, the system enhances computational efficiency while maintaining high accuracy. Our results demonstrate the system's effectiveness in running on resource-constrained platforms, providing a scalable and efficient solution for continuous, real-time heart monitoring in wearable healthcare technologies.

Seung-Hwan Lim, Oak Ridge National Laboratory – Attention for Causal Relationship Discovery from Biological Neural Dynamics

This paper explores the potential of the transformer models for causal representation learning in networks with complex nonlinear dynamics at every node, as in neurobiological and biophysical networks. Our study primarily focuses on a proof-of-concept investigation based on simulated neural dynamics, for which the ground-truth causality is known through the underlying connectivity matrix. For transformer models trained to forecast neuronal population dynamics, we show that the cross attention module effectively captures the causal relationship among neurons, with an accuracy equal or superior to that of the most popular causality discovery method. While we acknowledge that real-world neurobiology data will bring further challenges, including dynamic connectivity and unobserved variability, this research offers an encouraging preliminary glimpse into the utility of the transformer model for causal representation learning in neuroscience.
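
As a rough illustration of how an attention map can be read as a candidate causal graph (my sketch, not the paper's code; the threshold and matrix sizes are assumptions), the row-normalized scores softmax(QK^T / sqrt(d)) can be thresholded into "neuron i attends to neuron j" edges and compared with the ground-truth connectivity matrix.

```python
# Threshold a scaled-dot-product attention matrix into a candidate adjacency.
import numpy as np

def attention_adjacency(Q, K, threshold=0.2):
    d = Q.shape[1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)            # row-wise softmax
    return attn > threshold                            # predicted directed edges

rng = np.random.default_rng(0)
n_neurons, d = 12, 16
Q, K = rng.normal(size=(n_neurons, d)), rng.normal(size=(n_neurons, d))
print(attention_adjacency(Q, K).astype(int))
```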

Yuchen Liu, NC State University – Revolutionizing Modeling and Simulation with LLMs

The complexity of modern network infrastructure continues to grow, supporting a wide range of interconnected applications. Simulators are indispensable tools in this context, providing cost-effective and risk-free environments for experimentation and development. However, mastering these network simulators demands substantial domain-specific knowledge, even with comprehensive user manuals. Motivated by the capabilities of Large Language Models (LLMs), we introduce the network-oriented LLM as an intermediary between users and simulators, aiming to offer an interactive, automated, and script-free simulation paradigm. Using the NVIDIA Sionna simulator as a case study, we adapt the general-purpose LLM into a network-oriented LLM through joint parameter-efficient fine-tuning and retrieval-augmented generation, which then streamlines the complex simulation process through simple natural language queries. Comprehensive experiments with state-of-the-art LLMs demonstrate that the proposed method can effectively adapt LLMs for use with network simulators, significantly enhancing user-level operational efficiency and accessibility. The proposed pipeline can facilitate the broader development of various network-oriented LLMs, potentially automating a range of complex tasks.

Massimiliano Lupo Pasini, Oak Ridge National Laboratory – Scalable, energy-efficient training of graph neural networks for accurate and stable predictions of atomistic properties

We present our work on developing and training scalable graph foundation models (GFM) using HydraGNN, a multi-headed graph convolutional neural network architecture. HydraGNN expands the boundaries of graph neural network (GNN) computations in both training scale and data diversity. It abstracts over message passing algorithms, allowing both reproduction of and comparison across algorithmic innovations that define nearest-neighbor convolution in GNNs. This work discusses a series of optimizations that have allowed scaling up the GFM training to tens of thousands of GPUs on datasets that consist of hundreds of millions of graphs. Our GFMs use multi-task learning (MTL) to simultaneously learn graph-level and node-level properties of atomistic structures, such as the total energy and atomic forces. Using over 154 million atomistic structures for training, we illustrate the performance of our approach along with the lessons learned on two state-of-the-art United States Department of Energy (US-DOE) supercomputers, namely the Perlmutter petascale system at the National Energy Research Scientific Computing Center and the Frontier exascale system at Oak Ridge National Laboratory. The HydraGNN architecture enables the GFM to achieve near-linear strong scaling performance using more than 2,000 GPUs on Perlmutter and 16,000 GPUs on Frontier. Hyperparameter optimization (HPO) was performed on over 64,000 GPUs on Frontier to select GFM architectures with high accuracy. Early stopping was applied on each GFM architecture for energy awareness in performing such an extreme-scale task. The training of an ensemble of highest-ranked GFM architectures continued until convergence to establish uncertainty quantification (UQ) capabilities with ensemble learning. Our contribution establishes core capabilities for rapidly developing, training, and deploying further GFMs using large-scale computational resources to enable AI-accelerated materials discovery and design.
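
To show what "multi-headed" means in this multi-task setting, here is a toy analogue (my sketch, not HydraGNN): a shared per-atom representation feeds one head that is pooled into a graph-level property (total energy) and another that stays node-level (per-atom forces).

```python
# Toy multi-task heads on a shared atom representation.
import numpy as np

rng = np.random.default_rng(0)
n_atoms, hidden = 5, 32
h = rng.normal(size=(n_atoms, hidden))        # shared per-atom embeddings from a GNN

W_energy = rng.normal(size=(hidden, 1)) * 0.1  # graph-level head
W_forces = rng.normal(size=(hidden, 3)) * 0.1  # node-level head

energy = float((h @ W_energy).sum())           # pooled over atoms: total energy
forces = h @ W_forces                          # one 3-vector per atom
print(energy, forces.shape)
```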

Chinmay Mahendra Savadikar, NC State University – Efficient Lifelong Learning and Fine-Tuning: Task Synergies in Vision Transformers and Parameter Generation in Large Language Models

Pretrained models contain vast amounts of generic information, which can be leveraged for fine-tuning on smaller tasks or lifelong learning across multiple tasks. This work explores explicit methods to achieve this. First, we improve lifelong learning in Vision Transformers by learning task synergies, which update the model's structure without catastrophic forgetting using neural architecture search. To achieve this, we identify lightweight yet expressive modules for adaptation and propose a Hierarchical Task-Synergy Exploration-Exploitation (HEE) sampling method. Second, for parameter-efficient fine-tuning of Large Language Models (LLMs), we propose generating fine-tuning parameters directly from frozen pretrained parameters. This approach reduces the number of trainable parameters while maintaining performance and interpretability. Our method achieves competitive performance on natural language and computer vision tasks, with fewer parameters than existing methods like LoRA.
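
For reference on the baseline the poster compares against (this sketches LoRA, not the proposed parameter-generation method; all shapes and the scaling are assumptions), a frozen weight W is adapted by a low-rank product A @ B so only a small fraction of parameters is trained.

```python
# Minimal LoRA-style low-rank adaptation of a frozen weight matrix.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 256, 512, 8
W = rng.normal(size=(d_out, d_in))           # frozen pretrained weight
A = rng.normal(size=(d_out, r)) * 0.01       # trainable low-rank factor
B = np.zeros((r, d_in))                      # trainable, zero-init so W' = W at start

def adapted_forward(x):
    return W @ x + A @ (B @ x)               # effectively W' = W + A @ B, applied lazily

x = rng.normal(size=d_in)
print(adapted_forward(x).shape, (A.size + B.size) / W.size)  # trainable fraction ~ 4.7%
```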

Keita Teranishi, Oak Ridge National Laboratory – Durban: Toward AI-Assisted Programming for High-Performance Computing System


Pedro Valero Lara, Oak Ridge National Laboratory – HPC and LLMs

We evaluate the capabilities of current large Language Models (LLMs), such as OpenAI's ChatGPT or Meta's Llama, on a set of use cases targeting critical DOE HPC missions for code generation and automatic code translation. The purpose of this study is to define the fundamental practices and criteria for interacting with LLMs for HPC targets, to elevate the trustworthiness and performance levels of AI-generated HPC codes.

Dongkuan (DK) Xu, NC State University – LLM Diagnostic Chatbot for SAP Manufacturing

Large Language Models (LLMs) have rapidly gained prominence across various fields, including automated manufacturing domains. In the context of SAP (Systems, Applications, and Products) systems, which are widely used tools that help businesses manage finance, logistics, and supply chains, LLMs offer significant potential. These systems manage terabytes of numeric and textual log data generated by sensors and human operators within manufacturing environments. Traditionally, diagnosing issues from such data has required the expertise of several data scientists and domain specialists with strong background knowledge, who manually analyze each problem. Although some machine-learning techniques have been developed to analyze such data, several limitations persist. For example, the results generated by these techniques often require interpretation by analysts, as deriving meaningful insights from the data can be challenging without deep domain knowledge. This reliance on expert interpretation can hinder the efficiency and accessibility of automated data analysis and problem-solving in manufacturing environments. Moreover, traditional natural language processing techniques lack a comprehensive solution for handling massive textual data; they are often limited to basic keyword extraction and string matching, which are insufficient for capturing the full complexity of textual messages in industrial contexts.

To harness the capability of LLM-empowered tools, we developed a free-form question-answering chatbot built on an agentic LLM system, using the Synergizing Reasoning and Acting (ReAct) paradigm. This system is integrated with custom-defined tools to address the nature of SAP data and a wide range of inquiries from workers in the manufacturing environment. Concretely, user inquiries can be categorized into four types: questions that do not require any contextual information, questions that necessitate database retrieval, questions that require semantic search, and malicious queries aimed at obtaining protected information through prompt injection attacks. To address these types of inquiries, the agentic LLM is integrated with specialized tools, each tailored to a specific category of questions. These tools include an LLM-based SQL translator that operates with the full SAP database schema provided in the system prompt, an SQL engine that executes queries and returns the retrieved results, Retrieval-Augmented Generation (RAG) for semantic search, and an LLM-based final checker to ensure that the final response does not contain unrelated or confidential content.
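
A toy router in the spirit of the four-way categorization described above (every tool name, rule, and example query below is a hypothetical stand-in, not part of the actual SAP system): classify an inquiry and dispatch it to a matching tool stub.

```python
# Hypothetical dispatch of manufacturing inquiries to stub tools by category.
from typing import Callable

def answer_directly(q: str) -> str: return f"[LLM] {q}"
def sql_tool(q: str) -> str:        return f"[SQL over SAP tables] {q}"
def rag_tool(q: str) -> str:        return f"[semantic search] {q}"
def refuse(q: str) -> str:          return "Request blocked by the final checker."

def route(question: str) -> Callable[[str], str]:
    q = question.lower()
    if "ignore previous instructions" in q or "password" in q:
        return refuse                              # prompt injection / protected info
    if any(k in q for k in ("how many", "count", "average", "last week")):
        return sql_tool                            # needs database retrieval
    if any(k in q for k in ("similar", "like this error", "root cause")):
        return rag_tool                            # needs semantic search
    return answer_directly                         # no extra context required

for q in ("How many sensor faults last week?", "Find similar log messages", "What does MRP mean?"):
    print(route(q)(q))
```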

Haohui Wang, Virginia Tech – Mastering Long-Tail Complexity on Graphs: Characterization, Learning, and Generalization

In the context of long-tail classification on graphs, the vast majority of existing work primarily revolves around the development of model debiasing strategies, intending to mitigate class imbalances and enhance the overall performance. Despite the notable success, there is very limited literature that provides a theoretical tool for characterizing the behaviors of long-tail classes in graphs and gaining insight into generalization performance in real-world scenarios. To bridge this gap, we propose a generalization bound for long-tail classification on graphs by formulating the problem in the fashion of multi-task learning, i.e., each task corresponds to the prediction of one particular class. Our theoretical results show that the generalization performance of long-tail classification is dominated by the overall loss range and the task complexity. Building upon the theoretical findings, we propose a novel generic framework HierTail for long-tail classification on graphs. In particular, we start with a hierarchical task grouping module that allows us to assign related tasks into hypertasks and thus control the complexity of the task space; then, we further design a balanced contrastive learning module to adaptively balance the gradients of both head and tail classes to control the loss range across all tasks in a unified fashion. Extensive experiments demonstrate the effectiveness of HierTail in characterizing long-tail classes on real graphs, which achieves up to 12.9% improvement over the leading baseline method in accuracy.

Dawei Zhou, Virginia Tech – Large Foundation Model Development and Adaptation for Metamaterial Design

Metamaterials, characterized by unique properties stemming from their designed structures rather than chemical compositions, have offered possibilities not attainable with traditional materials and emerged as a frontier in disruptive technologies across various domains, such as sensing, information technology, infrastructure, and transportation. However, the design of metamaterials heavily relies on human-centric concepts, expertise, and inspiration. This knowledge-intensive process poses a significant barrier for engineers and technicians designing metamaterials tailored to their requirements. In this talk, I will introduce METASCIENTIST, an autonomous computational system aiming to synthesize the knowledge related to metamaterial design and efficiently generate novel hypotheses at scale. In particular, I will hinge on the key metamaterial applications and discuss our recent work on long-tailed hypothesis generation and non-IID individual calibration. Finally, I will conclude this talk and share thoughts about my future research.
