The Stanford Quantitative Imaging Laboratory (QIL) conducts research to extract and use objective information in images (quantitative measurements and controlled qualitative observations) in machine-processible form for radiology discovery and clinical applications. For discovery, we aim to identify novel imaging biomarkers of disease for diagnosis, prediction, and assessment of treatment response. For clinical applications, we are developing decision support applications that leverage quantitative information to reduce variation in practice and to improve the accuracy of physicians in making a diagnosis and their sensitivity in detecting disease response to therapy. Work in QIL thus spans a spectrum of activities ranging from basic imaging and informatics science to translational research and clinical service, in which quantitative imaging-based discoveries are operationalized and evaluated in clinical practice to transform the practice of medicine.
Current QIL projects:
QIL recently received a U01 award from NCI to develop informatics methods to transform clinical trial research. Imaging is crucial for assessing patients with cancer and for monitoring their response to treatment. However, current methods for quantifying the amount of tumor in the body, whether it is biologically active, and whether it is optimally responding to treatment are limited to simplistic measurements that are inaccurate and subject to inter-observer variation. In addition, the workflow for capturing quantitative information from images is difficult, time-consuming, and costly. Our goal is to develop tools to automate and streamline the process of identifying, measuring, and assessing the amount of tumor in patients, enabling oncologists to readily determine the response of individual patients and cohorts to a variety of cancer treatments.
The National Cancer Institute recently funded a number of national centers to establish the Quantitative Imaging Network (QIN). The Stanford QIL is a participating site in QIN, and we are creating computer algorithms to reproducibly and accurately measure tumor burden in patients. We are also developing methods to identify and quantify novel imaging biomarkers that can provide earlier indications of treatment response to cancer therapies. Our vision is to use computer methods to robustly assess treatment response in patients, to warehouse image data so that researchers can compare many different imaging biomarkers collected in different types of cancers, and to establish the best method of using imaging to assess treatment response in different cancers.
Radiological images contain a wealth of information, such as anatomy and pathology, that is often not explicit or computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools that enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open-source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images and could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content.
iPad permits researchers to describe the semantic information in images in a manner that fits within the research workflow, helping them collect the necessary structured image data. This information is stored in compliance with the National Cancer Institute's Annotation and Image Markup (AIM) standard format for image metadata.
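As a rough sketch of what such a structured annotation might look like (the field names, codes, and UIDs below are illustrative stand-ins, not the actual AIM schema), an annotation pairs an image region with controlled-terminology observations:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class CodedTerm:
    # A controlled-terminology code (e.g., a RadLex-style term); values are illustrative
    code: str
    label: str
    scheme: str = "RadLex"

@dataclass
class ImageAnnotation:
    # Minimal stand-in for an AIM-style annotation; field names are hypothetical
    study_uid: str
    image_uid: str
    roi: list                                  # polygon vertices in pixel coordinates
    observations: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

ann = ImageAnnotation(
    study_uid="1.2.840.0000.1",                # placeholder UID
    image_uid="1.2.840.0000.1.5",
    roi=[(10, 12), (40, 12), (40, 30), (10, 30)],
    observations=[CodedTerm("RID3874", "mass")],
)
print(ann.to_json())
```

The point of such a structure is that every observation is a machine-resolvable code rather than free text, so downstream systems can search and aggregate annotations without parsing prose.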
Radiology data is fragmented, being stored in a variety of information systems across the medical enterprise: the PACS contains images, radiology reports are in the RIS, pathology reports and other clinical data are in the EMR, and molecular data pertinent to disease are stored in a variety of institutional databases with few standards for data exchange. As a result, it is challenging to link these data and to discover novel imaging biomarkers of disease and treatment response.
We have developed an integrated data warehouse that brings together radiology reports with pathology and clinical information (and, in the future, molecular data) to enable researchers, clinicians, and educators to find cases of particular imaging findings, diagnoses, modalities, and other information. The resource (Radbank) has been used to identify teaching cases, to perform retrospective research, and to identify cohorts for new research.
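To illustrate the kind of linkage such a warehouse enables, here is a minimal sketch using an in-memory SQLite database; the table names, columns, and records are invented for illustration and do not reflect the actual Radbank schema:

```python
import sqlite3

# Toy schema sketching radiology-pathology linkage; names are illustrative.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE rad_reports (patient_id TEXT, modality TEXT, finding TEXT);
CREATE TABLE path_reports (patient_id TEXT, diagnosis TEXT);
""")
db.executemany("INSERT INTO rad_reports VALUES (?,?,?)", [
    ("p1", "CT", "liver lesion"),
    ("p2", "MR", "brain mass"),
    ("p3", "CT", "liver lesion"),
])
db.executemany("INSERT INTO path_reports VALUES (?,?)", [
    ("p1", "hepatocellular carcinoma"),
    ("p3", "hemangioma"),
])

# Find patients whose CT showed a liver lesion, together with the
# pathology diagnosis that confirms or refutes the imaging finding.
rows = db.execute("""
    SELECT r.patient_id, p.diagnosis
    FROM rad_reports r JOIN path_reports p USING (patient_id)
    WHERE r.modality = 'CT' AND r.finding LIKE '%liver lesion%'
    ORDER BY r.patient_id
""").fetchall()
print(rows)  # [('p1', 'hepatocellular carcinoma'), ('p3', 'hemangioma')]
```

Once reports live in one queryable store, cohort identification becomes a single join rather than a manual chart review across separate systems.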
Ultimately, this resource will inform decision support modules, providing evidence-based just-in-time information for practicing radiologists to render more accurate diagnoses.
Radiologists interpret images on commercial Picture Archiving and Communications Systems (PACS), the functionality of which is currently focused on retrieving and displaying images of particular patients. While this technology has advanced the workflow of image interpretation, there are substantial opportunities to leverage the computer processing capability of PACS and the massive image archives they contain to deliver decision support to radiologists. The future PACS will be a "portal" to radiological knowledge: the literature, electronically accessible data pertaining to diseases, and even functionality to enable radiologists to find images similar to those under review to help them with their diagnoses. Quantitative imaging methods for more robust evaluation will also be incorporated. The key initial advance in these systems will be integration of a range of clinical, pathological, and even molecular data to provide radiologists and treating physicians with a more complete picture of the patient than the images alone currently provide. We are currently developing the functionalities of these future intelligent workstations in several projects ongoing in the lab, including quantitative imaging, image annotation, natural language processing, and decision support.
Images, in particular medical and scientific images, contain vast amounts of information. While this information may include metadata about the image, such as how or when the image was acquired, the majority of image information is encoded in the image's pixels. However, information about how images are perceived by human or machine observers is not currently captured in a form that is directly tied to the images. A wealth of information pertaining to image content is thus disconnected from the images, limiting the ability to relate radiology imaging to other non-imaging data. We need tools that allow both human and machine image annotations to be created and stored in a standard format that is syntactically and semantically interoperable with the infrastructure of other biomedical resources, while supporting standards such as DICOM, HL7, and those being created by the W3C Semantic Web community.
We are developing methods to describe the semantic content in images using ontologies: explicit representations of the entities and relations in biomedicine. We are also creating tools to compose ontology-based descriptions of image content and associate them with images. This work will change the paradigm of medical imaging: instead of storing just pixels, clinical systems will store image data plus the image meaning. This will enable a broad range of computational analytic functionality, including semantic search (see the IQ Project), integration of image and non-image data, statistical modeling of disease, and intelligent decision support applications for image-based personalized care.
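A minimal sketch of what an ontology-based description might look like: image content is stated as (entity, relation, entity) triples drawn from controlled vocabularies, which a program can then query directly. The terms and relation names below are invented for illustration; real descriptions use controlled biomedical ontologies.

```python
# An image described by explicit statements rather than free text.
# Terms and relations here are illustrative placeholders.
description = {
    "image": "img-0042",
    "statements": [
        ("mass", "located_in", "liver"),
        ("mass", "has_margin", "ill-defined"),
    ],
}

def findings_in(desc, anatomy):
    """All entities the description places in a given anatomic location."""
    return [s for s, rel, o in desc["statements"]
            if rel == "located_in" and o == anatomy]

print(findings_in(description, "liver"))  # ['mass']
```

Because each statement uses defined terms, a query like "findings in the liver" needs no text parsing, which is precisely what pixel-only storage cannot support.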
The number of images in Radiology is exploding. Diagnostic radiologists are confronted with the challenge of efficiently and accurately interpreting cross-sectional imaging exams that now often contain thousands of images per patient study. Currently, this is largely an unassisted process, and a given reader's accuracy is established through training and experience. There is significant variation in interpretation between radiologists, and accuracy varies widely, a problem compounded by increasing image numbers. There is an opportunity to improve diagnostic decision making by enabling radiologists to search databases of radiological images and reports for cases that share imaging features with the images they are interpreting.
We are creating software tools that can be used to create and to search databases of radiological images based on image features, which include detailed information about lesions: (1) feature descriptors coded by radiologists using RadLex, a comprehensive controlled terminology, and (2) computer-generated features of pixels characterizing the lesion's interior texture and the sharpness of its boundary.
Our goal is to develop methods to facilitate the retrieval of radiological images that contain similar-appearing lesions. We are currently developing a content-based image retrieval (CBIR) system for CT images of the liver.
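The basic retrieval idea can be sketched as follows. The features here (intensity statistics as a crude texture proxy, plus a single boundary-sharpness score) are simplified stand-ins for the richer radiologist-coded and computer-generated descriptors described above:

```python
import math

# Sketch of similarity search over lesion feature vectors.
def features(pixels, sharpness):
    # Mean and variance of lesion-interior intensities serve as a crude
    # texture proxy; `sharpness` stands in for a boundary descriptor.
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return (mean, var, sharpness)

database = {
    "lesion-A": features([40, 42, 41, 39], sharpness=0.9),
    "lesion-B": features([80, 60, 95, 30], sharpness=0.2),
    "lesion-C": features([43, 44, 40, 41], sharpness=0.85),
}

def most_similar(query_vec, db):
    # Nearest neighbor by Euclidean distance in feature space
    return min(db, key=lambda k: math.dist(db[k], query_vec))

query_vec = features([41, 42, 40, 40], sharpness=0.88)
print(most_similar(query_vec, database))  # lesion-A
```

A production CBIR system would normalize each feature dimension and use far richer texture and shape descriptors, but the retrieval step reduces to the same nearest-neighbor search.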
A vast amount of imaging-related information is locked away in unstructured free text. Ideally, all radiology information would be collected in structured format ("structured reporting"), and we are creating applications to enable structured capture of image data (iPad). However, much radiology reporting is currently, and historically has been, unstructured free text. Our goal is to develop computer methods to extract key radiology information from free text to enable mining of radiology information in combination with quantitative image data. Ultimately, we envision that structured radiology knowledge will be derived and mined from vast collections of images and reports.
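As a toy illustration of the kind of extraction involved (real radiology natural language processing is far more sophisticated than these regular expressions, which are invented for this example):

```python
import re

# Toy information extraction from a free-text report sentence.
report = "There is a 2.3 cm hypodense lesion in the right hepatic lobe."

# Pull a measurement in centimeters and a descriptor-finding pair.
size = re.search(r"(\d+(?:\.\d+)?)\s*cm", report)
finding = re.search(r"(hypodense|hyperdense|cystic)\s+(lesion|mass|nodule)", report)

structured = {
    "size_cm": float(size.group(1)) if size else None,
    "descriptor": finding.group(1) if finding else None,
    "finding": finding.group(2) if finding else None,
}
print(structured)  # {'size_cm': 2.3, 'descriptor': 'hypodense', 'finding': 'lesion'}
```

The output is the point: once size and finding are fields rather than prose, reports can be joined with quantitative image data and mined at scale.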
Radiology interpretation is challenging because there are many different imaging features and much variation in how radiologists combine the evidence from combinations of these features into a decision about diagnosis or patient management (e.g., should this patient undergo biopsy?). Such decisions can potentially be improved through artificial intelligence methods such as Bayesian networks.
We are currently pursuing a number of projects to create models relating observed radiology imaging features to the possible diagnoses and decision points. We are developing these models to improve the diagnosis of breast cancer, to evaluate whether negative results of biopsy could be due to sampling error, and to help radiologists evaluate the malignant potential of thyroid nodules.
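The underlying reasoning can be sketched with Bayes' rule. The probabilities below are invented for illustration, and the conditional-independence (naive Bayes) assumption is a simplification of a full Bayesian network, which can model dependencies among features:

```python
# Posterior probability of malignancy given two imaging features,
# assuming conditional independence of features given the class.
# All numbers are illustrative, not clinical estimates.
p_malignant = 0.05                      # prior prevalence
p_feat_given = {                        # P(feature present | class)
    "spiculated_margin": {"malignant": 0.60, "benign": 0.05},
    "microcalcifications": {"malignant": 0.40, "benign": 0.10},
}

def posterior_malignant(observed):
    like_m, like_b = p_malignant, 1 - p_malignant
    for f in observed:
        like_m *= p_feat_given[f]["malignant"]
        like_b *= p_feat_given[f]["benign"]
    return like_m / (like_m + like_b)   # Bayes' rule

p = posterior_malignant(["spiculated_margin", "microcalcifications"])
print(round(p, 3))  # 0.716
```

Even with a low prior, two features that are each much more likely under malignancy push the posterior above 70 percent, which is exactly the evidence-combination step that varies among readers and that a model makes explicit and reproducible.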
DICOM (Digital Imaging and Communications in Medicine) is the global standard for medical image information, a position it has held for over 20 years. It is pervasive throughout the medical imaging community, and nearly every medical imaging device supports some aspect of the standard. DICOM models the image acquisition process and information objects related to imaging, and it specifies how the image data, the metadata, and related objects are represented in a binary format. For example, DICOM models patients as both clinical and clinical-trial subjects, imaging studies that consist of series of images, and the technical parameters of imaging modalities. Despite its size and complexity, DICOM lacks a Reference Information Model of the imaging domain. A reference information model is a formal description of a domain that enables users to share consistent meaning and establish semantic interoperability beyond a local context.
There is a pressing need for an information model of imaging based on DICOM to enable the community to create intelligent imaging-based applications that are interoperable. We are developing the DICOM Ontology (DO), an ontology that will be a single common reference information model for the imaging domain. The DO will be analogous to the Gene Ontology (GO), serving a role in radiology similar to the one GO serves in biology. The DO will unify and make explicit all the key entities and relations in DICOM in a human-usable and machine-processable format. The DO will ultimately become a reference ontology: one that comprehensively represents knowledge about the medical imaging domain independent of specific objectives or applications, guided by a theory of the imaging domain and by robust ontology design principles that encourage reuse.
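To convey the flavor of a reference information model in machine-processable form, here is a toy sketch that renders a few DICOM-style entities and relations as explicit triples; the entity and relation names are illustrative, not the actual DO classes:

```python
# Subject-predicate-object triples capturing DICOM-style structure:
# a patient participates in a study, which contains series of images.
triples = [
    ("Patient", "participates_in", "Study"),
    ("Study", "contains", "Series"),
    ("Series", "contains", "Image"),
    ("Image", "acquired_by", "Modality"),
]

def related(entity, relation):
    """Entities reachable from `entity` via `relation` edges, transitively."""
    out, frontier = set(), {entity}
    while frontier:
        nxt = {o for s, p, o in triples if p == relation and s in frontier}
        frontier = nxt - out      # only follow entities not yet visited
        out |= nxt
    return out

print(sorted(related("Study", "contains")))  # ['Image', 'Series']
```

Once the model is explicit like this, inferences such as "a Study transitively contains Images" fall out of graph traversal rather than being buried in the binary encoding of the standard.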
Image query is a critical functional component of systems that integrate biomedical images with non-image data. Query tools are vital for users to find and retrieve information in bioinformatics databases. Many biomedical repositories are accruing a wealth of images; for example, the National Cancer Imaging Archive (NCIA) and the American College of Radiology Imaging Network (ACRIN) are building image collections from diverse clinical trials. The current repositories provide the research community with technologies to federate data archives, but techniques are needed to permit researchers to explore the various resources, pose questions, correlate image data with related non-image data, and formulate new hypotheses and research directions. There is an emerging need for intelligent image query tools that enable users to search these image resources in an intuitive way. Our goal is to create image query tools that help users create queries exploiting the capability of biomedical ontologies, enabling search for images annotated using these knowledge sources.
In the IQ project, we will develop semantic methods for searching for annotated images. We will address three challenges: (1) the complexity of image content and semantics, (2) relating radiology imaging to other non-imaging data, and (3) the terminology challenges of synonymy and polysemy. We will address these challenges by creating an ontology to support image query. An ontology is an explicit knowledge representation that specifies the entities in a domain, and the relations among those entities, in a human-readable and machine-processable format. We will create methods that permit users to search for images based on ontology terms, and the ontologies will also be used to expand user queries. We will also develop an intuitive interface for accessing the ontologies and composing queries.
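A minimal sketch of ontology-driven query expansion, which addresses the synonymy and subclass problems directly; the synonym and subclass tables below are invented for illustration:

```python
# Expand a query term with its synonyms and subclasses before matching
# annotated images. Terms and mappings are illustrative placeholders.
SYNONYMS = {"neoplasm": ["tumor", "tumour"]}
SUBCLASSES = {"neoplasm": ["carcinoma", "sarcoma"]}

def expand(term):
    expanded = {term}
    expanded.update(SYNONYMS.get(term, []))
    for sub in SUBCLASSES.get(term, []):
        expanded |= expand(sub)   # recurse so deeper subclasses expand too
    return expanded

index = {                         # image id -> annotation terms
    "img-1": {"carcinoma"},
    "img-2": {"tumor"},
    "img-3": {"fracture"},
}

def search_images(term):
    terms = expand(term)
    return sorted(i for i, anns in index.items() if anns & terms)

print(search_images("neoplasm"))  # ['img-1', 'img-2']
```

A query for "neoplasm" retrieves an image annotated with the synonym "tumor" and one annotated with the subclass "carcinoma", neither of which a literal string match would find.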