Researchers Develop Optical Tomography System with BitFlow Frame Grabber to Better Diagnose Eye Diseases

WOBURN, MA, MARCH 11, 2022 — High-resolution 3D imaging of biological tissue is used extensively in the diagnosis of eye diseases, typically by applying a technique known as Optical Coherence Tomography (OCT). OCT testing has become a standard of care for the assessment and treatment of most retinal conditions. It is comparable to ultrasound, except that OCT employs light rather than sound and thereby achieves clearer, sharper resolution.
In a typical OCT system, an optical signal from a broadband source is divided into sample-arm and reference-arm signals using a beam splitter. The returning signals are then recombined, and the resulting interference signal is detected by a detector assembly. Some systems employ a wavelength-tuning optical source and are termed “swept source” OCT (SS-OCT). Meanwhile, a system in which a stationary broadband signal is dispersed spatially and detected using a spectrometer is referred to as Fourier Domain OCT (FD-OCT).
Both SS-OCT and FD-OCT techniques suffer from changes in the polarization of the optical signal when the signal is transmitted through materials possessing anisotropic properties, meaning their optical properties differ depending on the direction of measurement. The result is artifacts that compromise image quality and, with it, the ability of doctors to diagnose disease.
Reducing Polarization Artifacts
Funded by the Max-Planck-Gesellschaft and Massachusetts General Hospital, a team of researchers has developed a polarization insensitive detection unit (PIDU) for a spectrometer-based FD-OCT system that greatly reduces polarization-associated artifacts in OCT images. The spectrometer unit employed a diffraction grating with 1200 lines per mm, an 80 mm lens, and a Sensors Unlimited InGaAs line-scan camera with a resolution of 2048 pixels.
Data from the spectrometer was collected at a line-scan speed of 100 kHz utilizing a BitFlow Axion-CL Camera Link frame grabber. The Axion-CL supports a single Base CL camera, Power over Camera Link (PoCL), and can acquire up to 24 bits at 85 MHz. The frame grabber benefits from a PCIe Gen 2 interface and a DMA engine optimized for fully loaded computers. Data collected by the Axion-CL was processed in LabVIEW software.
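The researchers processed the acquired spectra in LabVIEW. Purely as an illustration of the standard spectral-domain reconstruction step such a pipeline performs, the Python sketch below converts one 2048-pixel spectrometer line into a depth profile (A-scan). It is not the authors' code, and it assumes the spectrum has already been resampled to be linear in wavenumber; dispersion compensation is omitted.

```python
import numpy as np

def ascan_from_spectrum(spectrum, background):
    """Convert one 2048-pixel spectrometer line into an OCT depth profile (A-scan).

    Assumes the spectrum is linear in wavenumber; a real FD-OCT pipeline also
    applies k-space resampling and dispersion compensation, omitted here.
    """
    fringe = spectrum - background              # remove the reference/DC background
    fringe *= np.hanning(fringe.size)           # window to suppress FFT sidelobes
    return np.abs(np.fft.rfft(fringe))          # magnitude vs. depth bin

# Synthetic example: a single reflector produces a cosine fringe across the line.
k = np.arange(2048)
background = np.full(2048, 100.0)
spectrum = background + 10.0 * np.cos(2 * np.pi * 150 * k / 2048)
ascan = ascan_from_spectrum(spectrum, background)
print("peak depth bin:", int(np.argmax(ascan)))   # ~150
```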
To demonstrate proof of principle in biological tissue, the researchers imaged chicken breast because of its high birefringence. Tests were conducted on the OCT system with and without the PIDU. During the imaging, the tissue was held in hand and maneuvered constantly to mimic real clinical conditions. Images were acquired and recorded for 10 seconds.
For the OCT system without the PIDU, the bright and dark bands of the sample fluctuated constantly, which can be attributed to polarization-dependent phase changes in the sample light. With the PIDU, however, the image artifacts were no longer noticeable, producing images that are more accurate for a doctor to interpret. On close examination, the researchers found that it was not only the light from the tissue that changed in intensity, but also the light from the inner wall of the capsule, which was not in contact with the tissue. This supports the idea that polarization artifacts come not solely from a tissue sample, but can also arise from the system itself.
The researchers believe their new design will be particularly useful in clinical settings where the sample arm is in constant motion during probe introduction or when it is subjected to peristaltic motion. Further studies are planned on other biological tissues.
David Odeke Otuya, Gargi Sharma, Guillermo J. Tearney, and Kanwarpal Singh, “All fiber polarization insensitive detection for spectrometer based optical coherence tomography using optical switch,” OSA Continuum 2, 3465-3469 (2019)
Schematic of the FD-OCT system employing the polarization insensitive detection scheme. SMF: single mode fiber, Cr: circulator, BS: beam splitter, PC: polarization controller, Co: collimator, NDF: neutral density filter, M: mirror, MPU: motor power unit, EC: electrical connection, MW: motor wire, PBS: polarizing beam splitter, OS: optical switch, G: grating, L: lens, LSC: line scan camera (Image courtesy of Otuya, Sharma, Tearney, and Singh)
(left) Image of chicken breast tissue acquired with OCT system without PIDU and (right) image of the same tissue acquired with OCT system with PIDU (Image courtesy of Otuya, Sharma, Tearney, and Singh)

BitFlow BFPython API provides Python Wrapper to Enable Rapid Prototyping

WOBURN, MA, FEBRUARY 23, 2022 — BitFlow, a global leader in frame grabbers for machine vision, life sciences and industrial imaging, has introduced BFPython, an application programming interface that allows engineers with Python expertise to acquire images from BitFlow’s broad range of frame grabbers. Available immediately, these Python bindings wrap the BitFlow SDK’s configuration, acquisition, buffer management and camera control APIs. The download also includes several Python examples that illustrate how bindings can be used.

A free, open source programming language, Python is simple to learn and use, making it one of the most popular languages for developing imaging applications, whether on Linux, Windows or embedded platforms. In machine vision, where rapid prototyping is mission-critical to understanding how a proposed imaging solution is progressing, BitFlow BFPython accelerates the building process and reduces final development costs for those experienced with Python code. To further assist in development, BFPython includes several sub-modules that provide convenient interfaces to features such as CoaXPress camera control (via GenICam) and Camera Link camera control (via the CL Serial API), among others.
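As a rough idea of what working with such bindings looks like, the sketch below shows the typical open-board / allocate-buffers / acquire / clean-up pattern. The module, class and method names here are illustrative assumptions rather than the documented BFPython API; consult the examples shipped with the download for the actual calls.

```python
# Hypothetical sketch only: module, class, and method names are assumptions,
# not the documented BFPython API. It illustrates the usual frame grabber
# workflow: open a board, set up a DMA ring buffer, start acquisition,
# wait for frames, then clean up.
import numpy as np
import BFModule.BufferAcquisition as bufacq   # assumed submodule name

def grab(board_index=0, num_buffers=8, num_frames=32):
    acq = bufacq.clsCircularAcquisition()     # assumed circular-acquisition class
    acq.Open(board_index)                     # open the BitFlow board
    acq.BufferSetup(num_buffers)              # allocate the DMA ring buffer
    acq.AqSetup(bufacq.SetupOptions.setupDefault)
    acq.AqControl(bufacq.AcqCommands.Start, bufacq.AcqControlOptions.Wait)
    frames = []
    for _ in range(num_frames):
        frame = acq.WaitForFrame(1000)        # block up to 1000 ms for a frame
        frames.append(np.copy(acq.GetBufferData(frame.BufferNumber)))
    acq.AqControl(bufacq.AcqCommands.Stop, bufacq.AcqControlOptions.Wait)
    acq.BufferCleanup()
    acq.Close()
    return frames
```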

Supporting the full line of BitFlow frame grabbers, the BitFlow SDK enables developers to bring high-speed image acquisition into machine vision applications, from cost-efficient simple inspection to ultra high-speed, high-resolution systems. The SDK includes a large number of example applications with full source code to aid in understanding the available functions, along with a number of utilities for developing and debugging. The free SDK version is for use with third-party applications such as LabVIEW, VisionPro and HALCON. The paid version is required for users developing their own applications, and offers such high-level advantages as header files, libraries and extensive example programs with detailed source code.

BitFlow Frame Grabber Helps Researchers Generate 3D Structural Images of Biological Tissues

WOBURN, MA, JANUARY 24, 2022 — Biology researchers at Indiana University1 have developed an integrated system combining high-resolution optical coherence microscopy (HR-OCM) with dual-channel scanning confocal fluorescence microscopy (DC-SCFM) to enable 3D visual evaluation of cell activities involved in pupil development and disease conditions. Still in its experimental stages, this dual-modality 3D system simultaneously co-registers reflectance and fluorescence signals, giving it the ability to accurately track structural and functional changes in live specimens over time. Indiana University researchers hope to use their system to enable new investigations of biological processes in small animal models.
A BitFlow Axion Camera Link frame grabber is a critical component of the hybrid system. It acquires the output signal from a spectrometer equipped with a Teledyne e2v high-speed line-scan camera operating at a rate of 250 kHz. A lateral resolution of 2 μm and an axial resolution of 2.4 μm are achieved in tissue over a field of view of 1.1 mm × 1.1 mm. The analog scanning signals, as well as the trigger signals for the BitFlow frame grabber, are generated synchronously through a four-channel analog output data acquisition card. Simultaneous recording of HR-OCM and DC-SCFM data was performed using custom software developed in LabVIEW 2017.
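The scan and trigger waveforms themselves are generated in the researchers' custom LabVIEW software. As a simple illustration of the idea only (not their code, and with made-up parameter values), the sketch below builds a fast-axis scan ramp and a matching line-trigger pulse train that would be written synchronously to a multi-channel analog output card.

```python
import numpy as np

# Illustrative parameters only; not taken from the paper.
line_rate_hz = 250_000          # spectrometer line-scan rate
lines_per_frame = 500           # A-scans per B-scan
dac_rate_hz = 4_000_000         # analog-output sample rate

samples_per_line = dac_rate_hz // line_rate_hz          # 16 samples per A-scan
n = lines_per_frame * samples_per_line

# Fast-axis galvo ramp: sweeps the beam across the sample once per frame.
scan_ramp = np.linspace(-1.0, 1.0, n)

# Line trigger for the frame grabber: one short pulse at the start of each A-scan.
trigger = np.zeros(n)
trigger[::samples_per_line] = 5.0                        # 5 V, one DAC sample wide

# Writing scan_ramp and trigger in the same synchronized output task keeps the
# camera/frame grabber acquisition locked to the beam position.
```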
As the data generated by faster, higher-resolution Camera Link cameras continues to grow, the Axion's PCIe Gen 2 interface, with its StreamSync™ DMA optimized for modern computers, is needed to realize their full performance. Features such as easy switching between different tap formats, a powerful acquisition engine, and a flexible I/O and timing generator are all readily available in a dedicated, low-cost, CL Base-oriented frame grabber.
During development, the researchers applied different strategies to enable the simultaneous recording of information, as well as to overcome the focal-plane mismatch between the two imaging modalities. The system's performance was evaluated by imaging fluorescent microspheres embedded in multi-layer tape and a silicone phantom.
The combined system is synergistic in generating structural and functional information about samples; the DC-SCFM allows for discrimination between different fluorophores, while the HR-OCM enables the 3D localization of features inside tissue samples, including their depth.

1 “Development of high-speed, integrated high-resolution optical coherence microscopy and dual-channel fluorescence microscopy for the simultaneous co-registration of reflectance and fluorescence signals,” Reddikumar Maddipatla and Patrice Tankam, School of Optometry, Indiana University, Bloomington, IN 47405, USA

System diagram

How BitFlow Overcame Supply Chain Constraints to Maintain Frame Grabber Availability

Dec. 15, 2021 – Supply chain disruptions, chip shortages and the rising cost of raw materials continue to bedevil the machine vision industry. System integrators are confronted with an experience once rare: no stock available at distributors, and no idea when cameras, frame grabbers, computers and other necessary accessories will come in.
 
Bucking this trend is BitFlow, Inc., a global manufacturer of CoaXPress and Camera Link frame grabbers headquartered outside Boston. At the onset of the pandemic, the company’s management began taking steps to mitigate the impact of shortages on production. BitFlow’s close relationships with trusted suppliers and partners of over 25 years enabled it to build in-house stock when shortages started becoming apparent. For example, Altera FPGAs (Field Programmable Gate Arrays) are key components of its cards. An FPGA is an integrated circuit whose logic and memory blocks can be reconfigured in the field, helping BitFlow customers reduce cost and improve design cycle times. To ensure a steady supply of FPGAs, BitFlow met with Altera to establish forecasting and lead times, ultimately resulting in the purchase of additional stock.
 
Donal Waide, director of sales for BitFlow, notes that the company is experiencing stronger than expected demand yet has continued to reliably deliver its industry-leading products. “In the short term, lead times may, in some cases, be longer than we would like. For the most part, however, we have boards on hand and are able to go into new projects, whereas many of our competitors are struggling to meet their production goals.”
 
Although there are no silver bullets, Waide points to several steps BitFlow took to stay ahead of supply chain disruptions:

  • Source alternative suppliers, and review your existing suppliers for common components.
  • Increase collaboration and visibility with suppliers to determine their technology roadmaps so you can evolve product designs accordingly. Suppliers may drop the mature, less-profitable components used in your legacy products in favor of newer technologies with higher margins.
  • When reviewing suppliers, consider their global footprint and their ability to mitigate disruptions in one region with manufacturing capabilities in another.
  • Remember that during shortages, suppliers determine who to support, not the other way around, underscoring the importance of strong relationships.
  • If possible, move away from single-sourced parts. 

Ironically, strained global supply chains have led to heightened demand for machine vision systems as manufacturers struggle to maximize yield from scarce components. Manufacturers are turning to machine vision to track goods throughout the production process with quality control measures that minimize waste and ensure compliance with customer and regulatory standards. In addition, with both parts and labor in short supply, plant maintenance is another area where machine vision is adding value, particularly when combined with AI. Cameras can be used to identify vibration issues before a failure occurs, for instance, or to detect wear on conveyor belts or leakage in remote pipes. Monitoring can be performed without service personnel shutting down a production line or opening cabinets and enclosures to check mechanical parts for wear. Keeping lines running yields an immediate production increase.

Laser Scanning System uses BitFlow Frame Grabber to Improve Driver Visibility on Foggy Roads

WOBURN, MA, NOVEMBER 15, 2021 — Fog is produced by the suspension of very fine moisture droplets in the air. When light hits these droplets, it scatters, resulting in a loss of contrast and a dense white background. As these droplets get smaller, fog gets thicker, blanketing roadways, reducing visibility, limiting contrast, and distorting the perception of speed. Reports from the Federal Highway Administration cite an annual average of 31,385 fog-related car accidents resulting in more than 500 deaths.

To help drivers achieve improved visibility through fog, researchers1 from Purdue University and the University of Science and Technology of China developed an experimental off-axis spatiotemporally gated multimode laser scanning system. Extensive testing has shown the system yields high-quality images at seven scattering path lengths, which far exceeds the capability of conventional imaging solutions such as LIDAR, which typically lacks the spatial resolution and contrast of optical measurement.

During testing, image capture was performed using a Photonfocus 2-megapixel CMOS camera with a high full-well capacity, recording at a 128 × 118 pixel resolution to simulate pupil-plane detection. The camera was configured for external exposure control mode so that the external trigger signal controlled both the exposure start and duration. Using a region of interest containing the 128 × 118 pixels, the researchers achieved a 4 kHz frame rate with a 50% duty cycle. These images were continuously transferred to computer memory through a BitFlow Neon CLB Base/PoCL Camera Link frame grabber. Featuring PoCL, this board can acquire from all Base CL cameras up to 24 bits at 85 MHz and has enough industrial I/O to handle even the most complicated synchronization tasks.
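Those timing figures fit together as a quick back-of-the-envelope check (an illustration, not code from the study): at a 4 kHz frame rate the frame period is 250 µs, so a 50% duty cycle corresponds to a 125 µs exposure.

```python
frame_rate_hz = 4_000
duty_cycle = 0.5

frame_period_us = 1e6 / frame_rate_hz        # 250 us between exposure starts
exposure_us = duty_cycle * frame_period_us   # 125 us exposure per frame
print(frame_period_us, exposure_us)
```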

Image quality was evaluated by placing a flat wooden deer-shaped figurine inside a rectangular glass tank filled with water and subjected to different levels of scattering. Milk was gradually added to the water tank while the scattering path length was measured. The researchers utilized a 592 nm diode laser source with a 7 mm coherence length and employed hologram recording to achieve temporal gating.

To adapt the system for practical implementation on motor vehicles, the researchers plan to abandon laser interferometry and directly employ a nanosecond pulsed light source and electronic gating on the detected signal, as in LIDAR imaging. They also plan to locate an illumination module and a detection module on each side of a vehicle, using two separate, synchronized beam scanners that scan a common focus.

No Fish Story: BitFlow Frame Grabber Optimizes Hyperspectral Imaging System Assessing Salmon Health

WOBURN, MA, SEPTEMBER 21, 2021 — Smoltification is a complex series of physiological changes that allow young Atlantic salmon to adapt from living in fresh water to living in seawater. In salmon farming, this transition from “parr” to “smolt” is controlled using lights or functional feed to ensure a continuous and predictable supply of fish to grocery stores, restaurants and other seafood markets.


Scientists at SINTEF, one of Europe’s largest independent research institutes, located in Trondheim, Norway, recently developed a hyperspectral imaging (HSI) system1 to study the vital aspects of detecting smoltification, relying in part upon a BitFlow Camera Link frame grabber to grab high-speed video frames for analysis at more than 100 frames per second.

The ability to verify smoltification is critical since incomplete seawater adaptation may result in poor animal welfare and increased mortality. Animal welfare is of increasing importance in salmon farming, as the industry is under pressure to improve production and farming operations due to ethical concerns. Conventional smoltification assessments measure chloride content in blood samples after exposing fish to saline water, or detect the presence of ion-transporting enzymes through analysis of tissue samples from gills. These methods are time-consuming, so only a few salmon are typically tested from populations of several hundred thousand fish.


To evaluate the robustness of its HSI approach, SINTEF placed emphasis on collecting diverse data with variations in fish color, patterning, size, and shape across three different salmon farming sites. Data were collected weekly in synchronization with the sites’ respective production and testing schedules. The BitFlow frame grabber was installed in a Shuttle SH110G computer with an Intel i7 processor to grab frames from a Specim® FX10 hyperspectral camera (Figure 1) equipped with a 23 mm f/2.4 (OLE23) lens. Exposure settings were regularly adjusted depending on local conditions and the state of the fish. And because the smolt transition involves salmon becoming more reflective, shutter speed was adjusted to keep the exposure within the sensor’s dynamic range. To make all data sets comparable despite differences in ambient lighting conditions and exposure settings, all were normalized using white and dark reference images.


The raw data obtained from HSI were multidimensional images of individual fish, including their background. Each layer of this multidimensional image represented a single gray-scale image corresponding to the intensity of the reflectance measurement at a specific wavelength. When stacked, all the layers and reflectance measurements represented a 3D cube (Figure 2). A step-wise procedure was used to process and analyze the data so the low-dimensional spectral characteristics could be observed, and classification of parr or smolt made possible. Wavelengths were optimized by factoring in water temperature, dissolved oxygen, water opacity, and color, as well as lighting and feeding regimes.
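The white/dark reference normalization described above follows the standard flat-field formula for hyperspectral data. The sketch below is illustrative only (not SINTEF's code) and assumes a raw cube of shape (lines, pixels, bands).

```python
import numpy as np

def normalize_reflectance(raw_cube, white_ref, dark_ref):
    """Convert raw hyperspectral intensities to relative reflectance.

    raw_cube:  (lines, pixels, bands) raw measurements
    white_ref: (pixels, bands) mean response to a white reference target
    dark_ref:  (pixels, bands) mean response with the light blocked
    """
    denom = np.clip(white_ref - dark_ref, 1e-6, None)   # avoid division by zero
    return (raw_cube - dark_ref) / denom

# Tiny synthetic example: 2 scan lines x 3 spatial pixels x 4 bands.
rng = np.random.default_rng(0)
dark = rng.uniform(90, 110, size=(3, 4))
white = dark + rng.uniform(800, 1000, size=(3, 4))
raw = dark + 0.4 * (white - dark) + rng.normal(0, 1, size=(2, 3, 4))
print(normalize_reflectance(raw, white, dark).mean())   # close to 0.4
```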


Upon conclusion of its study, SINTEF demonstrated an HSI system in which only three wavelengths are needed to identify the smoltification status of Atlantic salmon, and showed that this system could serve as either a supplementary or free-standing verification tool in fish production. In doing so, the researchers also laid a pathway to manufacturing low-cost HSI instruments for use in production tanks or integrated into existing sorting and vaccination systems for faster, wider and more cost-effective population sampling of Atlantic salmon.

BitFlow Returns to VISION Trade Show in Stuttgart, Germany

WOBURN, MA, SEPTEMBER 15, 2021 — BitFlow, a leading innovator in frame grabbers for industrial and commercial imaging applications, today announced details of its in-person participation in VISION, the world’s leading trade fair for machine vision, to be held in Stuttgart, Germany, October 5 to 7, 2021. BitFlow encourages VISION attendees to visit booth #10H46 to engage with its trained engineers and see firsthand its newest CoaXPress and Camera Link frame grabbers.


Because VISION could not be held last year due to the pandemic, its relaunch is a positive signal to the international machine vision market. More than 250 companies will take part in VISION, providing visitors with a current overview of the wide range of machine vision products, software and services now available, together with global insights into future technologies.


“Coronavirus halted VISION and other industry events in 2020, making us all acutely aware of the value of face-to-face, in-person business meetings,” said Donal Waide, Director of Sales, BitFlow, Inc. “Keeping connected and moving forward with events like VISION is critically important for economic recovery. We are very excited to get back on the road to meet with our customers, colleagues and distribution partners.”


BitFlow will showcase at VISION its entire portfolio designed to meet the most demanding imaging needs within diverse industries such as machine vision, quality control, defense, medical research and robotics. In addition, BitFlow will join with several of its camera partners to present live demonstrations of its high-speed frame grabbers, including the new line of fan-cooled Cyton CXP4-V CoaXPress models engineered for use with small form-factor fanless computers, like the NVIDIA® Jetson Xavier Developer Kit.

BitFlow CoaXPress Frame Grabber Used in Groundbreaking New 3D Imaging System

WOBURN, MA, JULY 16, 2021 — Research scientists with the Energy Materials Telecommunications Center, National Institute for Scientific Research in Quebec, Canada, have developed a groundbreaking technique to acquire 3D images at over 1,000 frames per second with resolution as high as 1180 × 860 pixels — far beyond the capabilities of systems available today — by eliminating information redundancy in data acquisition. Certain to open new opportunities for 3D applications, the dual-view band-limited illumination profilometry (BLIP) with temporally interlaced acquisition (TIA) system, or simply BLIP-TIA, relies upon the BitFlow Cyton-CXP CoaXPress frame grabber to transmit images from two CMOS cameras to a computer for processing at rates surpassing 12.5 Gb/s.
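The required link bandwidth is easy to estimate: width × height × bits per pixel × frame rate. The numbers below are illustrative assumptions (8-bit pixels, each camera streaming full resolution at 1,000 fps), not figures reported by the researchers, but they show why a multi-gigabit CoaXPress link is needed.

```python
def required_bandwidth_gbps(width, height, bits_per_pixel, fps):
    """Raw pixel data rate in gigabits per second, ignoring protocol overhead."""
    return width * height * bits_per_pixel * fps / 1e9

# Assumed example: a 1180 x 860 sensor streaming 8-bit frames at 1000 fps.
per_camera = required_bandwidth_gbps(1180, 860, 8, 1000)
print(round(per_camera, 2), "Gb/s per camera")   # ~8.12 Gb/s; two such streams exceed 12.5 Gb/s
```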

Existing 3D systems based on the widely used technique of Fringe Projection Profilometry (FPP) have two main limitations. First, each camera captures the full sequence of fringe patterns, imposing redundancy in data acquisition that ultimately limits the systems’ imaging speeds. Second, the cameras are placed on different sides of a projector. This arrangement often induces a large intensity difference due to directional light scattering, as well as shadowing caused by occlusion from local surface features, both of which reduce reconstruction accuracy.

To overcome these limitations, the scientists developed BLIP-TIA with a new algorithm for coordinate-based 3D point matching from different views. Implemented with two cameras from Optronis placed side-by-side and the BitFlow Cyton-CXP frame grabber, it allows each camera to capture half of the sequence of the phase-shifted patterns, reducing the individual camera’s data transfer load by 50%, and freeing up capacity to transfer data from more pixels on each camera’s sensor or to support higher frame rates.
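The paper's coordinate-based point-matching algorithm is not reproduced here, but the core fringe-projection operation it builds on, recovering a wrapped phase map from phase-shifted fringe images, is standard. The NumPy example below uses the textbook four-step formula on synthetic fringes; it is illustrative only, not the BLIP-TIA reconstruction.

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting with fringes shifted by 0, 90, 180 and 270 degrees.

    Returns the wrapped phase in (-pi, pi]. A full FPP pipeline would then
    unwrap the phase and triangulate depth from the camera/projector geometry.
    """
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic fringes over a small image with a known linear phase ramp.
h, w = 64, 64
true_phase = np.tile(np.linspace(0, 4 * np.pi, w), (h, 1))
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [100 + 50 * np.cos(true_phase + s) for s in shifts]
phi = wrapped_phase(*frames)     # matches true_phase modulo 2*pi
```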

Besides its high-speed data transfer, the Cyton-CXP two-channel frame grabber incorporates a Gen 2.0 x8 PCI Express bus interface on its back end for high-speed access to host memory in multi-camera systems such as the BLIP-TIA. It also allows control commands, triggers and power to be sent to and from cameras over the same coaxial cable to simplify overall design.

To verify high-speed 3D surface profilometry, the researchers used BLIP-TIA in a number of tests including recording non-repeatable 3D dynamics by imaging the process of glass breaking while being struck by a hammer. The growth of cracks and the burst of fragments with different shapes and sizes were clearly shown in the reconstructed 3D images.

Besides technical improvements, the researchers are now exploring new applications for BLIP-TIA. For example, it could be integrated into structured illumination microscopy and frequency-resolved multidimensional imaging, or used for dynamic characterization of glass under external forces, recognition of hand gestures for human–computer interaction, object tracking and reaction guidance in robotics, vibration monitoring in rotating machinery, and behavior quantification in biological science.

BitFlow Frame Grabber Aids in Development of Compact, Cost-effective Subsurface Fingerprint System

High-Speed Cyton CXP-4 transmits data from 13 subsurface images at various depths, captured at 720 frames per second and 2116 DPI, with no latency

WOBURN, MA, MAY 14, 2021 — In an effort to disguise his fingerprints, the Prohibition-era gangster John Dillinger famously had doctors cut away the outer layer of his skin, the epidermis, and dip his fingertips into hydrochloric acid. Since then, criminals looking to evade capture have had their fingerprints sanded off or burned off with cigarettes, and have applied super glue so the ridges were not identifiable.

Fingerprints link people to their arrest records and outstanding warrants. Obliterating them seemingly provides a clean slate. However, a collection of skin layers around 200–400 µm beneath the finger surface is composed of live cells and is collectively called the “viable epidermis” or internal fingerprint. It has essentially the same topography as the finger surface. One of the most promising identification technologies for imaging below the surface of an external fingerprint is full-field optical coherence tomography (FF-OCT). While effective and proven for biometric use, FF-OCT can be expensive and cumbersome, and it has been limited to large benchtop systems.

To overcome these limitations, researchers at PSL Research University (Paris, France), the Polish Academy of Sciences (Warsaw, Poland), and the Norwegian University of Science and Technology (Gjøvik, Norway) developed a more compact, mobile and inexpensive FF-OCT that may lay the groundwork for more widespread use of this technology. The newly designed system comprises an Adimec two-megapixel camera, a BitFlow Cyton-CXP4 CoaXPress four-lane frame grabber, an interferometer and a NIR light-emitting diode. The system enables recording of 1.7 cm × 1.7 cm images of subsurface finger features, such as internal fingerprints and sweat ducts. A lightweight slab of plexiglass, 30 cm × 30 cm × 1 cm in size, covers the top of the system; a hand can be rested against it during fingerprint imaging for a more stable acquisition.

LED illumination provided 900 mW of spatially incoherent light at 850 nm. The light was magnified 5× by lenses to decrease the divergence of the light emerging from the LED. This design helped retain as much of the LED’s light as possible, which was necessary to operate the camera close to its saturation level and to ensure homogeneous illumination of the entire sample area.

One of the challenges the researchers faced was running the camera at a maximum speed of 720 frames per second (fps), which, together with the large number of pixels, placed a high demand on data transfer between the camera and the computer. To address this problem, the BitFlow Cyton-CXP4 frame grabber, capable of transferring data at a maximum speed of 25 Gb/s, was installed. The sensor was controlled by an Advantech microcomputer with one PCIe x16 slot dedicated to the BitFlow frame grabber. Thirteen FF-OCT images could be acquired at different depths by stepping a reference reflector in 50 µm increments between acquisitions, resulting in images spanning depths from 0 µm to 600 µm. Each image was recorded in 570 milliseconds at 2116 dpi, showing both the subsurface fingerprint and sweat ducts.
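The depth count follows directly from the stated parameters (a simple check, not code from the study): stepping the reference reflector in 50 µm increments from 0 µm to 600 µm gives 13 acquisition planes.

```python
step_um = 50
max_depth_um = 600

depths_um = list(range(0, max_depth_um + step_um, step_um))
print(len(depths_um), depths_um)   # 13 planes: 0, 50, ..., 600 um
```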

To demonstrate the accuracy of the FF-OCT, testing was conducted on 585 subjects, with six unique fingers per subject. Commercial off-the-shelf fingerprint software from Neurotechnology was used; the resulting Detection Error Trade-off and Receiver Operating Characteristic analyses showed a false rejection rate of 1.38% at a false acceptance rate of 0.1%. These results support the applicability of the new system for fingerprint imaging in real-life deployment.

Researchers found that the new FF-OCT was particularly useful when the surface of a finger is heavily damaged. The FF-OCT can still record a fingerprint image because of the remaining internal fingerprint, which is actually now easier to image because most of the scattering and absorbing epidermis layer is removed.

Researchers are continuing their work hoping to create even more compact and cost-effective FF-OCT designs, along with improved algorithms for better extraction of subsurface information.