Recent research shows that achieving high interpretation performance demands annual reading volumes of between 4,000 and 10,000 mammograms, while in Canada the number of mammograms a radiologist must read to maintain accreditation is 1,000.

Interpreting mammography images is a challenging task: the images have high resolution, while cancerous lesions are small and sparsely scattered over the breast.

The remarkable success of deep convolutional neural networks in visual object recognition, detection, and many other domains has made them the fundamental building blocks of next-generation computer-aided diagnosis/detection systems.

The interpretation tool's core model leverages two convolutional neural networks (CNNs) to imitate the way a radiologist analyzes and interprets a mammography image. In the first stage, it runs a coarse, low-footprint network over the entire image to identify the most suspicious regions. In the second stage, a fine-grained network extracts details from the chosen regions. The final stage is a fusion module that aggregates the global and local information to make a prediction.
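
As a concrete illustration, here is a minimal PyTorch sketch of this global-then-local design. The layer sizes, the top-k saliency-based region selection, the concatenation-based fusion, and the batch-size-of-one assumption in the cropping step are all illustrative choices, not the production architecture.

```python
import torch
import torch.nn as nn


class TwoStageModel(nn.Module):
    """Coarse global pass -> crop suspicious patches -> fine local pass -> fusion."""

    def __init__(self, num_regions: int = 3, patch: int = 224):
        super().__init__()
        self.num_regions, self.patch = num_regions, patch
        # Stage 1: low-footprint network over the full image, ending in a
        # single-channel saliency map of per-location suspicion scores.
        self.global_net = nn.Sequential(
            nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )
        # Stage 2: heavier network applied only to the cropped patches.
        self.local_net = nn.Sequential(
            nn.Conv2d(1, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Stage 3: fuse the global score with the per-patch descriptors.
        self.fusion = nn.Linear(1 + 64 * num_regions, 1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (1, 1, H, W) full-resolution mammogram (batch size 1 assumed,
        # so the integer crop coordinates below are scalars).
        saliency = self.global_net(image)                     # (1, 1, h, w)
        global_score = saliency.amax(dim=(1, 2, 3)).unsqueeze(1)
        _, _, h, w = saliency.shape
        sy, sx = image.shape[2] / h, image.shape[3] / w       # map-to-image scale
        top = saliency.flatten(1).topk(self.num_regions, 1).indices
        feats = [global_score]
        for k in range(self.num_regions):
            # Map the k-th saliency maximum back to full-image coordinates.
            y0 = int((top[0, k] // w * sy).clamp(0, image.shape[2] - self.patch))
            x0 = int((top[0, k] % w * sx).clamp(0, image.shape[3] - self.patch))
            crop = image[:, :, y0:y0 + self.patch, x0:x0 + self.patch]
            feats.append(self.local_net(crop))                # (1, 64)
        return torch.sigmoid(self.fusion(torch.cat(feats, 1)))


model = TwoStageModel()
prob = model(torch.randn(1, 1, 2048, 1024))  # malignancy probability, shape (1, 1)
```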

The WisdomX platform is designed to integrate seamlessly with existing client workflows. An edge device runs the processing locally and connects to any picture archiving and communication system (PACS). The patient's study is retrieved from the PACS and sent to the edge device for processing, and the results are then returned to the viewer.
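
As a rough illustration of the edge-device side of this flow, the sketch below uses the open-source pynetdicom library (an assumption; the source does not name a DICOM toolkit or specify whether the study is pushed or pulled). The edge device acts as a DICOM storage SCP: it receives the study sent from the PACS, runs the model, and stores a result object back so the viewer can display it. The AE title, addresses, ports, and the run_model placeholder are all hypothetical.

```python
from pynetdicom import AE, evt, AllStoragePresentationContexts

PACS_ADDR, PACS_PORT = "pacs.example.local", 11112   # hypothetical PACS endpoint
EDGE_PORT = 11113                                    # hypothetical edge-device port


def run_model(dataset):
    """Placeholder for the two-stage CNN inference. Real code would build a
    result object (e.g. a DICOM Structured Report or Secondary Capture);
    here the input dataset is returned unchanged."""
    return dataset


def send_result(result_ds):
    """Push the processing result back to the PACS so the viewer can load it."""
    ae = AE(ae_title="WISDOMX_EDGE")
    ae.requested_contexts = AllStoragePresentationContexts
    assoc = ae.associate(PACS_ADDR, PACS_PORT)
    if assoc.is_established:
        assoc.send_c_store(result_ds)
        assoc.release()


def handle_store(event):
    """Called once per image as the PACS sends the patient's study here."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    send_result(run_model(ds))
    return 0x0000                                    # C-STORE success status


# Run the edge device as a DICOM storage SCP; the PACS (via a routing rule
# or a C-MOVE triggered by the worklist) sends the study to this listener.
ae = AE(ae_title="WISDOMX_EDGE")
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", EDGE_PORT), evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```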
