I. Introduction
Technological advancements have always been one of the key drivers of innovation
in healthcare. The evolution of wearable and remote healthcare devices over the
last two decades is just one clear example. Indeed, fueled by innovations in sensor
technology, material technology, and micro- and nanoelectronics, various types of medical
sensing previously possible only in a hospital environment are now readily
available in wireless wearable devices. Wearables are now available as medical devices,
but it is equally interesting to see the wide array of consumer products that now
allow an individual to measure vital signs such as heart rate, respiration
rate, and blood oxygenation in a semi-continuous manner.
To a large extent, the development of wearable technology is driven by medical challenges
in the cardiovascular space. Indeed, most wearable devices focus on measuring and
interpreting data related to the autonomic nervous system, the cardiovascular system,
the pulmonary system, and to a lesser degree the central nervous system. Existing personal and wearable devices
are mostly unable to monitor the gastrointestinal system. Unfortunately,
metabolic health is declining globally at an alarming rate. Metabolic disorders occur when the
normal chemical reactions are disrupted, resulting in either too much or too little
of critical substances. While some are genetically inherited, others develop
when critical organs such as the liver, pancreas, or bowel are diseased. Lifestyle,
behavior, and nutrition are also critical for our metabolic health. For many digestive
disorders, the exact pathology and underlying disease mechanisms remain only partially
understood. In an effort to address these challenges, various research groups around
the world are betting on advanced ingestible technology that could allow unique and
unprecedented insight into the human metabolic system[1]. One area where we might see similarly explosive growth
is the ingestible sensor, shown in Fig. 1(a), aimed at demystifying the inner workings of our gastrointestinal tract.
Fig. 1(b) shows the block diagram of the ingestible sensor. The radio, which includes a matching
network, transmitter (TX), receiver (RX), and local oscillator, enables communication
with the outside world. The signal detected by the sensor is amplified, filtered,
and converted into digital data by the sensor Analog Front End (AFE). This data
is stored in memory and periodically transmitted to an external RX via the TX. Control
signals from the outside are received through the RX and used to control the Micro
Controller Unit (MCU), which in turn controls the ingestible sensor. The matching network
ensures impedance matching between the radio and the antenna, and the local oscillator
generates the carrier frequency. The Power Management Unit (PMU) manages the power
required by each block, and the Advanced High-performance Bus (AHB) enables data transfer
between blocks.
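To make this data path concrete, the minimal Python sketch below models the flow from the sensor AFE through memory to periodic TX bursts under MCU control; the block names follow Fig. 1(b), while the sampling rate, memory depth, and ADC resolution are illustrative assumptions rather than values from a specific design:

```python
from collections import deque

# Hypothetical parameters; an actual ingestible sensor would set these per design.
SAMPLE_INTERVAL_S = 1.0       # sensor AFE sampling period
TX_BURST_PERIOD_S = 60.0      # how often the TX flushes memory to the external RX
MEMORY_DEPTH = 256            # on-chip memory depth (samples)

memory = deque(maxlen=MEMORY_DEPTH)

def afe_acquire(raw_signal: float) -> int:
    """Amplify, filter, and digitize a sensor reading (modeled as a 12-bit ADC)."""
    amplified = raw_signal * 100.0            # gain stage
    return max(0, min(4095, int(amplified)))  # 12-bit quantization with clipping

def tx_burst(samples: list[int]) -> None:
    """Radio TX: in a real device this modulates the LO carrier via the matching network."""
    print(f"TX burst: {len(samples)} samples sent to external RX")

def mcu_step(t: float, raw_signal: float) -> None:
    """MCU control loop: store AFE data in memory, trigger TX bursts periodically."""
    memory.append(afe_acquire(raw_signal))
    if t % TX_BURST_PERIOD_S < SAMPLE_INTERVAL_S and memory:
        tx_burst(list(memory))
        memory.clear()

# Example run: three minutes of simulated operation with a placeholder sensor signal.
t = 0.0
while t < 180.0:
    mcu_step(t, raw_signal=0.5 + 0.1 * (t % 7))
    t += SAMPLE_INTERVAL_S
```

The sketch only illustrates the store-then-burst behavior that motivates duty-cycled radio operation discussed later in Section III.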
The biggest challenges for the envisaged applications are the size constraints and
the tissue depth. Innovations are needed to push the device size down while still
ensuring reliable operation of a free-floating ingestible sensor. To do so,
ultra-low-power wireless communication with a small footprint, battery, and antenna
size is required.
In this paper, wireless system miniaturization solutions for ingestible sensors are
discussed. Some of the technical challenges in miniaturization are introduced. State-of-the-art
miniaturized wireless systems are reviewed, and future trends are forecasted.
This paper is organized as follows. State-of-the-art ingestible sensors and their wireless
systems are introduced in Section II. Section III discusses miniaturization techniques
for wireless systems. Section IV suggests future trends for wireless systems in
ingestible sensors. Finally, our conclusions are drawn in Section V.
Fig. 1. (a) Ingestible sensor that connects the inside of the GI tract to the outside
world; (b) its block diagram.
II. Wireless Systems in Ingestible Sensors
Advances in integrated circuits, RF wireless communication, and power management and sourcing
have led to a quantum leap in the evolution of ingestible sensors, enabling miniaturization
and low power consumption while providing wireless connectivity. Capsule-shaped
ingestible camera pills[2] are clear examples. In addition to image sensing, ingestible sensors detecting biomarkers
are being actively researched. These existing sensors are summarized in Table 1.
Table 1. Ingestible sensors with different sensing modalities
| Reference | Sensing Modality | Application | Wireless Data Comm. (frequency / data rate) | Power Consumption |
|---|---|---|---|---|
| PillCam [2] | Camera | Endoscope | 434 MHz / 2~10 Mbps | NA |
| ISSCC '18 [11] | Camera | Endoscope | 100~180 MHz / 80 Mbps | 33.8 mW |
| JSSC '13 [12] | Camera | Endoscope | 915 MHz / 20 Mbps | 10.78 mW |
| Gastroenterol. '11 [13] | Pressure, pH, Temperature | Measuring the transit times through the GI tract | 434 MHz / NA | NA |
| JSSC '18 [14] | Ion | Evaluation of electrolyte balance in the GI tract | 2.4 GHz / 100 bps | 5.5 nW |
| Nature '23 [16] | Fluorescence | Inflammatory bowel disease detection | 915 MHz / NA | NA |
| TBCAS '23 [18] | Fluorescence | Biomolecular detection | 915/400 MHz / 10 Mbps | 0.66 mW |
| Nature '18 [20] | Gas | Understanding areas of the intestine and the fermentation patterns of intestinal microorganisms | 433 MHz / NA | NA |
After the PillCam[2]’s successful launch, numerous companies now provide ingestible camera pills for endoscopy.
These capsule endoscopes travel through the gastrointestinal tract, transmitting
the imaging data to an external RX wirelessly. Such capsule-shaped wireless optical
sensors eliminate the need for sedation and invasive endoscopic insertion and enable
imaging of the small intestine, which is difficult to access with conventional
endoscopes.
Gastrointestinal (GI) monitoring with images has significant advantages, but it
also presents notable limitations. First, reliable and accurate diagnostics demand high-resolution
image sensing, which requires data transfer rates of tens of Mb/s[11],[12]. This makes it difficult to secure the communication range through tissue from the capsule
to the base station, so an additional ``repeater'' device attached to the skin,
including external transceivers and an antenna array, is required[7],[9],[11],[12].
Second, the limited scope of diseases detectable by visual inspection and the inability
to assess nutritional status make it unsuitable for a comprehensive diagnosis of gastrointestinal
health. Third, the capsule endoscope has a battery life of less than 12 hours[2], due to the high data rates and LEDs, which is too short to diagnose gut motility
or to monitor digestion and chronic diseases. Due to these limitations, the camera pill
can hardly provide the multi-modal sensing required for a comprehensive assessment
of the digestive system, where various and complex physiological processes occur.
As an alternative, biomarker-based ingestible sensors that monitor the GI tract via
electrochemical sensors have recently been introduced[13]. Unlike optical/image sensors, biomarker sensors can relax the data rate (<
10 Mb/s) while lowering the power consumption (< 10 mW). Therefore, biomarker-based
ingestible sensors can provide more reliable wireless communication, e.g.,
a longer communication range and a direct connection to off-the-shelf devices without the
need for repeaters. This also increases the communication link budget, relaxing
the power budget of the wireless module or allowing a smaller antenna to further
optimize the overall system form factor.
Biomarkers such as body temperature, pressure, pH, ions, oxidation-related markers, biomolecules,
and gases are measurable indicators of a biological or disease condition. The
biomarker-sensing ingestible sensor in[13] detects changes in pH, temperature, and pressure while traveling
through the GI tract, measuring the transit times through the stomach, small intestine,
colon, and the entire digestive tract. This provides a comprehensive profile of gut motility,
aiding in the diagnosis of conditions such as gastroparesis and chronic idiopathic
constipation.
Ion sensors utilizing Ion-Selective Electrodes (ISE)[14],[15] sense dietary minerals such as Na+, Ca2+, Mg2+, and K+ in GI fluid, enabling the
evaluation of electrolyte balance in the GI tract, as illustrated in Fig. 2(a).
Oxidation-related biomarkers are used for diagnosing intestinal diseases such as Inflammatory
Bowel Disease (IBD)[16],[17]. Biosensor bacteria[16] or a chemiluminescent paper-based sensor[17] are used to detect the oxidation-related biomarkers. They express luminescence when exposed
to certain inflammation-related markers, and this light is converted into an electrical
signal by a photodetector, as shown in Fig. 2(b).
Fig. 2(c) shows a biomolecular sensor that utilizes a 15-pixel CMOS fluorescence sensor array
for biomolecular sensing[18]. Each pixel in the sensor array is designed to react to specific target biomolecules such
as DNA or proteins. The sensor array is capable of sensing various biomarkers simultaneously
in GI fluid and thus collects the diverse responses of the gut to new drugs or treatments,
accelerating medical advancements.
Gas sensing has also been introduced to monitor the GI tract[20], identifying gases such as oxygen (O$_{2}$), hydrogen (H$_{2}$), and carbon dioxide
(CO$_{2}$) that serve as indicators of various biological and chemical processes in
the gut, as depicted in Fig. 2(d). This sensing method allows distinguishing areas of the intestine and understanding
the fermentation patterns of intestinal microorganisms through the thermal conductivity
and resistance changes on the sensor surface in response to different gases. Apart
from sensors for detecting biomarkers, there are ingestible sensors that react to
specific drugs to verify medication intake[23].
Fig. 2. Biomarker based sensors: (a) Ion Sensing [15]; (b) Oxidation [17]; (c) Fluorescence Sensor Array [18]; (d) Gas sensing [20].
The required data transfer rate for the aforementioned ingestible sensors depends
on the type of sensor. It typically ranges from tens of b/s (temperature sensor,
blood sensor, etc.)[24] to several Mb/s for more complex sensing applications such as oxidation (1 Mb/s[17]), gas (1 Mb/s[20],[22]), ion sensing (4 Mb/s[14]), and biomolecular sensing (7 Mb/s[18]). Furthermore, high-resolution image sensing requires even higher data rates. When
the data rate increases, the communication range decreases, according to Shannon’s
theorem[25], thus requiring more power to maintain the communication range. For ingestible sensor
communication, the data rate, distance, and power consumption must therefore be considered
together, depending on the sensor type, purpose, and environment.
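To make this trade-off concrete, the short Python sketch below estimates the maximum range at which the Shannon capacity still exceeds a target data rate, assuming a simple log-distance path-loss model and illustrative values for transmit power, noise figure, and bandwidth (none taken from a specific cited design):

```python
import math

def max_range_m(data_rate_bps: float,
                tx_power_dbm: float = -10.0,   # assumed EIRP of a small ingestible TX
                bw_hz: float = 2e6,            # assumed channel bandwidth
                noise_figure_db: float = 10.0,
                pl0_db: float = 40.0,          # assumed path loss at 1 m (incl. tissue)
                path_loss_exp: float = 3.0) -> float:
    """Largest distance at which Shannon capacity still exceeds the target data rate."""
    # Required SNR from C = BW*log2(1 + SNR)  ->  SNR = 2^(C/BW) - 1
    snr_req_db = 10 * math.log10(2 ** (data_rate_bps / bw_hz) - 1)
    noise_dbm = -174 + 10 * math.log10(bw_hz) + noise_figure_db
    # Allowable path loss, then invert the log-distance model PL(d) = PL0 + 10*n*log10(d)
    pl_allow_db = tx_power_dbm - (noise_dbm + snr_req_db)
    return 10 ** ((pl_allow_db - pl0_db) / (10 * path_loss_exp))

for rate in (1e4, 1e6, 1e7):   # 10 kb/s, 1 Mb/s, 10 Mb/s
    print(f"{rate/1e6:6.2f} Mb/s -> max range ~ {max_range_m(rate):.2f} m")
```

Under these assumed numbers the estimated range shrinks monotonically as the data rate rises, which is exactly the trend summarized in Fig. 3.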
These requirements introduce further considerations, such as the choice of frequency
band and standard, tissue loss, changes in the antenna's radiation pattern, coexistence
with other sensors, and miniaturization.
Selecting the frequency band and standard protocols for the communication of ingestible
sensors is crucial. Commonly used Industrial, Scientific, and Medical (ISM) bands,
e.g., 2.4 GHz, accommodate several standards such as Wi-Fi[26], Bluetooth[27], Bluetooth Low Energy (BLE)[28], and Zigbee[29]. Bluetooth and Wi-Fi are attractive standards for making the ingestible sensor compatible
with off-the-shelf IoT devices, but their relatively high power consumption (100 mW
~ 800 mW) would be problematic[24],[30]. BLE, designed for intermittent communication, i.e., duty-cycling, lowers the power consumption
but is limited for continuous data collection and real-time monitoring. Zigbee,
reported to consume less power than Bluetooth[31], is limited for sensors that require high data rates (~Mb/s), as it supports only
up to 250 kb/s.
The 2.4 GHz ISM band has the advantage of supporting diverse standard protocols that are
highly compatible with a wide range of off-the-shelf mobile devices, but it suffers from high susceptibility
to interference and weak security due to its congestion. In addition,
the 2.4 GHz ISM band has relatively high tissue loss compared to lower frequency
bands. For example, with a tissue thickness of 7.8 cm, the loss is about 90 dB at
2.4 GHz, whereas it is about 40 dB at 400 MHz[32].
Therefore, the lower frequency band, the 402-405 MHz frequency range known as the Medical
Implant Communications System (MICS) band[33], and its standard, IEEE 802.15.6[34], appear strategic for the communication of ingestible sensors[35].
Although many standards suitable for ingestible sensors have been released
and adopted, further optimization of the communication with more flexibility has
been pursued. Thus, proprietary protocols in alternative bands[10],[36],[39],[40] have been proposed for ingestible sensors. Owing to their flexibility, they can further
optimize the communication performance and power efficiency. However, they require
additional devices to relay the communication to off-the-shelf devices for compatibility.
Fig. 3 shows the trends in data rate versus communication range of state-of-the-art (SoTA)
wireless modules for ingestible sensors. As mentioned above, higher data rates result
in shorter communication ranges. Technological advances are being made to push beyond
the 10 Mb/s·m limit.
Fig. 3. Trends in data rate versus communication range of state-of-the-art wireless
modules for ingestible sensors.
Implementation of wireless communication for ingestible sensors has several challenges.
Monitoring the GI tract using an ingestible sensor, which is located deep within the
body, results in significant tissue loss during communication. Additionally, the antenna's
radiation pattern changes with body movements and digestive activities, and the
antenna's impedance varies with the state of the GI tract, e.g., an empty or full stomach.
Other wearable or implantable devices located close to the ingestible sensor can also interfere
with the communication to the external hub.
Another challenge is the size of the ingestible sensor. Miniaturization can increase
medication compliance and allow easy passage through narrow regions of the GI tract.
Millimeter-scale sensors also enable long-term monitoring by attaching the sensor at specific
points through a gastroscope.
However, as the module size decreases, the available transmit power becomes limited,
e.g., due to the lower antenna efficiency of a smaller antenna (to be discussed in Sec. III),
which makes it harder to attain a sufficient Signal-to-Noise Ratio (SNR) for the desired
data rate[25]:

$$C = BW\cdot\log_{2}\left(1+\frac{S}{N_{0}\cdot BW}\right)$$

where $C,\,\,S,\,\,N_{0},$ and BW are the channel capacity, signal power, noise spectral
density, and bandwidth, respectively. Fig. 4 shows the trends in wireless module area versus data rate of SoTA wireless modules
for ingestible sensors.
Fig. 4. Trends in wireless module area versus data rate of state-of-the-art wireless
modules for ingestible sensors.
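As a purely numerical illustration of this relation (with assumed received power levels and a 2 MHz bandwidth, not values from the cited designs), a 10 dB drop in signal power, such as might result from a less efficient miniaturized antenna, directly cuts the achievable channel capacity:

```python
import math

def shannon_capacity_bps(signal_dbm: float, n0_dbm_hz: float = -174.0, bw_hz: float = 2e6) -> float:
    """C = BW * log2(1 + S / (N0 * BW)), with powers converted from dBm to mW."""
    s_mw = 10 ** (signal_dbm / 10)
    n_mw = 10 ** (n0_dbm_hz / 10) * bw_hz
    return bw_hz * math.log2(1 + s_mw / n_mw)

# Assumed received powers; the 10 dB step mimics a less efficient, smaller antenna.
for prx_dbm in (-70.0, -80.0):
    print(f"Prx = {prx_dbm:5.1f} dBm -> C ~ {shannon_capacity_bps(prx_dbm)/1e6:.1f} Mb/s")
```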
Moreover, miniaturization limits the battery capacity, making long-term
monitoring difficult. For longer sensing while pursuing miniaturization, energy-efficient
operation is required. It also helps to avoid tissue heating caused by high power
consumption of the ingestible sensor.
To address these issues, ultra-low-power circuit technologies are necessary. Therefore,
a miniaturized wireless communication module that also consumes low power is essential
for ingestible sensors.
III. Solutions for Wireless System Miniaturization
The wireless system plays a key role in ingestible sensors, transferring the sensor
data. To secure communication with the outside world, a link budget analysis as shown
in Fig. 5 should be conducted, considering the Effective Isotropic Radiated Power (EIRP) of
the sensor TX and the path loss (e.g., tissue loss, free-space loss, multipath loss),
and ensuring that the received power is sufficiently higher than the sensitivity, i.e.,
the minimum signal strength at which the signal can be received. Increasing the transmit power
or improving the RX sensitivity can increase the link budget, but both require more power consumption,
which can reduce the sensor lifetime and cause tissue heating that would deteriorate
the biocompatibility of the sensor.
Fig. 5. Link budget analysis of wireless transceiver.
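A minimal link-budget check can be written in a few lines; the 40 dB tissue loss near 400 MHz is the figure cited in Section II[32], while the EIRP, additional losses, and RX sensitivity are assumptions chosen only for illustration:

```python
# Minimal link-budget check for an ingestible-sensor TX to an external RX.
# All numbers are illustrative assumptions except the ~40 dB tissue loss at
# 400 MHz for ~7.8 cm of tissue cited in Section II [32].
eirp_dbm       = -16.0   # assumed EIRP of the sensor TX (small, inefficient antenna)
tissue_loss_db = 40.0    # tissue loss near 400 MHz
other_loss_db  = 20.0    # assumed free-space + multipath loss outside the body
rx_sens_dbm    = -90.0   # assumed external RX sensitivity for the target data rate

rx_power_dbm = eirp_dbm - tissue_loss_db - other_loss_db
link_margin_db = rx_power_dbm - rx_sens_dbm

print(f"Received power: {rx_power_dbm:.1f} dBm, link margin: {link_margin_db:.1f} dB")
# A positive margin (here 14 dB) indicates the budget closes; with the ~90 dB
# tissue loss at 2.4 GHz the same budget would not close, illustrating the band choice.
```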
Increasing the antenna efficiency can also secure the link budget, and thus the communication
range. However, this inevitably increases the antenna size[41]:
where $\eta ,\,\,\alpha ,$ and $L_{g}$ represent the antenna efficiency, attenuation constant,
and antenna length, respectively.
To increase the antenna efficiency while maintaining the size, the antenna's frequency band
has to be increased as well[42]:
where $f_{r},\,\,c,\,\,\lambda _{g},\,\,\varepsilon _{eff},$ and $\varepsilon _{r}$
are the resonance frequency, speed of light, wavelength, effective permittivity, and
combined relative dielectric constant of the substrate, respectively. However, at
higher frequency bands the path loss also increases[32], which counteracts the link budget and is even more problematic for ingestible
sensor communications, where EM absorption by tissue contributes significantly to the path loss.
1. Antenna Miniaturization
To miniaturize the antenna while securing the communication distance at the target
frequency band, several antenna design techniques have been introduced[42]. Fig. 6(a) shows a shorting pin[42], widely used for the miniaturization of implantable antennas. The shorting pin connects
the patch to the ground plane and acts as the ground plane of a monopole antenna, effectively
doubling the electrical size of the antenna. Thus, by adding the shorting pin,
the antenna has a resonant frequency similar to that of an antenna twice its size without
the shorting pin[42]. A ground plane with slots can provide additional miniaturization: since slots
added to the ground plane increase the capacitance, the resonance frequency is lowered[45].
Miniaturization is also realized through the pattern of the antenna patch. One such
pattern is the meander line structure, as shown in Fig. 6(b). This structure lengthens the current flow path, so the physical size can be decreased
while maintaining the electrical length, enabling operation at the desired frequency.
Furthermore, the parasitic capacitance in the gaps between the meandering lines
moves the resonant frequency toward the lower end of the spectrum[42]. These antenna miniaturization techniques allow the same electromagnetic
performance to be achieved with smaller sizes at the same frequency.
Fig. 6. (a) Antenna structure with shorting pin; (b) Meander line antenna.
2. Antenna Interface Integration
The pattern antennas discussed above have a narrow-band characteristic, and thus their
impedance is prone to process variations. Moreover, the antenna impedance varies sensitively
with the surrounding state, e.g., diet conditions and location. To adaptively
respond to the antenna impedance variation, a Tunable Matching Network (TMN) and a matching
detector are crucial for ingestible sensors, rather than re-tuning the antenna itself to the
right impedance.
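To illustrate why such detuning matters, the sketch below computes the reflection coefficient and mismatch loss for a few hypothetical antenna impedances seen by a 50 Ω front end; the impedance values are assumptions chosen only to show the trend:

```python
import math

def mismatch(z_ant: complex, z0: complex = 50.0) -> tuple[float, float]:
    """Return (|Gamma|, mismatch loss in dB) for an antenna impedance against a 50-ohm port."""
    gamma = abs((z_ant - z0) / (z_ant + z0))
    loss_db = -10 * math.log10(1 - gamma ** 2)   # power lost to reflection
    return gamma, loss_db

# Hypothetical impedances: nominal, and two detuned states (e.g., empty vs. full stomach).
for z in (50 + 0j, 30 - 25j, 15 - 60j):
    g, l = mismatch(z)
    print(f"Z_ant = {z!s:>12}  |Gamma| = {g:.2f}  mismatch loss = {l:.1f} dB")
```

Even the moderate detuning in this toy example costs several dB of link margin, which is why a TMN with a matching detector is preferred over a fixed match.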
Since the TMN typically comprises passive devices, e.g., inductors and capacitors, it
is usually configured with external devices[47] or a separate TMN chip[48], as it requires large impedances for wide coverage, especially at low frequencies.
The separate TMN chip is typically implemented in a Silicon-On-Insulator (SOI) process
with a low-loss substrate[48], or with multiple inductors and capacitor banks[49],[50], which introduces extra loss and silicon area.
To interface the antenna to a transceiver front-end, an antenna switch is typically
employed to connect the TX’s and RX’s TMNs separately, as shown in Fig. 7, but this approach introduces additional loss and area. Instead, a shared
TX/RX TMN that directly interfaces with the antenna can avoid such an antenna switch[51] and further optimize the TMN size. The shared TX/RX TMN
shares an inductor and uses capacitors to configure the matching network for TX mode
and RX mode. Therefore, the TMN is optimized for a specific frequency; if the TX and RX
use different frequency bands or require a wide bandwidth, the use of the shared TX/RX
TMN may be limited. Furthermore, this structure can lead to a trade-off in optimizing
the performance of each mode. In TX mode, achieving maximum efficiency requires a
specific impedance matching condition, but this condition can increase the noise figure
in RX mode. Conversely, optimizing the matching for minimal noise in RX mode can reduce
the efficiency in TX mode[54].
Fig. 7. Transceiver with Antenna switch and tunable matching network.
On top of the TMN, impedance matching detection and correction techniques[48] also have to be adopted in ingestible sensors, in order for the wireless communication
to adapt to the varying surroundings in the body. Traditional techniques[48],[55],[56] use a directional coupler[55],[56] or an off-chip tuner[48] for the impedance detection, which is bulky and lossy at RF frequencies, e.g., 2.4 GHz.
Recent works[48],[50] have demonstrated automatic impedance matching tuning
to compensate for the antenna impedance variations.
[50] uses a two-point, amplitude-only detection to extract the impedance mismatch information,
which is then tuned through an on-chip three-stage LC network. It finds the optimum
tuning setting through an exhaustive search, which can be time consuming, and the work
has a relatively small tuning range, up to |${\Gamma}$| = 0.3. In[48], impedance mismatch detection is done by a polar detection which captures both the
amplitude and phase of the impedance seen from the Power Amplifier (PA)
output. Based on the polar information, an off-chip tuner with SOI switches is tuned
iteratively to reduce the impedance mismatch. With the extra phase information, a
successive approximation is used for the optimization, leading to a faster calibration.
In addition, the off-chip SOI tuner covers a large tuning range, up to a VSWR of 6 (i.e.,
|${\Gamma}$| = 0.714). Since[48] targets cellular applications, the power consumption of the detection circuit
is relatively high (30 mW). In addition, the separate SOI tuner chip increases the
system cost and dimensions.
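The difference between the two search strategies can be sketched as follows. The capacitor-bank model, cost function, and detector outputs are simplified assumptions, not the actual circuits of[48] or[50]; the sketch only contrasts how many measurements each strategy needs:

```python
# Conceptual sketch: both strategies look for the capacitor-bank code that
# minimizes the antenna mismatch. A 6-bit tuner code is assumed; gamma_of()
# and error_sign() are stand-ins for detector readings, not published circuits.

def gamma_of(code: int, optimum: int = 41) -> float:
    """Hypothetical mismatch magnitude, smallest at the (unknown) optimum code."""
    return min(1.0, abs(code - optimum) / 64)

def error_sign(code: int, optimum: int = 41) -> int:
    """Hypothetical polar detector: sign of the remaining tuning error (+/0/-)."""
    return (optimum > code) - (optimum < code)

def exhaustive_search(bits: int = 6) -> tuple[int, int]:
    """Amplitude-only detection: try every code, i.e., 2^bits measurements."""
    best = min(range(2 ** bits), key=gamma_of)
    return best, 2 ** bits

def successive_approximation(bits: int = 6) -> tuple[int, int]:
    """With sign (phase) information, a binary search needs only `bits` measurements."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)
        if error_sign(trial) >= 0:   # trial code not past the optimum -> keep the bit
            code = trial
    return code, bits

print(exhaustive_search())         # (41, 64): 64 measurements
print(successive_approximation())  # (41, 6): 6 measurements
```

The extra phase information is what turns a 2^N-step exhaustive sweep into an N-step search, which is the calibration-speed advantage noted above.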
With simpler design approaches, fully integrated on-chip antenna impedance detectors[32],[48] have been introduced. The detection technique in[49] re-uses the balun transformer necessary for the differential PA as a hybrid transformer.
As shown in Fig. 8, the hybrid transformer provides a signal resulting from the impedance mismatch between the
real load impedance and an on-chip reference load. With complex detection, extracting the
In-phase (I) and Quadrature (Q) information, the exact antenna impedance mismatch from
the reference load can be detected. Amplitude-only impedance detection[32],[57],[58] can simplify the detection circuitry and further reduce the power consumption, but
it increases the detection time and requires a longer settling time for the ingestible
sensor to adapt to its surroundings for reliable communication.
Fig. 8. Impedance matching detection with hybrid transformer [48].
3. Crystal-less Communication
As discussed in Section II, the required data rates for ingestible sensors are relatively
low (~10 Mb/s). Moreover, the communication duration does not need to be long, on the $\mu
s$ to $ms$ scale[32]. Therefore, to maximize the energy efficiency of the communication, the wireless module
can be turned off most of the time when there is no communication event, so-called ``duty-cycling''[32], as illustrated in Fig. 9.
Fig. 9. Power scenario of duty cycled communication.
In duty-cycled communication, the wireless module spends most of the time sleeping,
while the wakeup timer, which wakes up the module for communication, is continuously
on. Therefore, the power consumption of the wakeup timer can dominate the energy
efficiency of the communication. Moreover, due to the timing inaccuracy of the timer,
the wireless module should wake up sufficiently in advance of the time when the communication
starts (the guard time in Fig. 9). Hence, the timing accuracy of the wakeup timer is also crucial for achieving high energy
efficiency, as it reduces the operating time of the module.
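The impact of wakeup-timer accuracy on duty-cycled energy efficiency can be estimated with a few lines of arithmetic; the power and timing numbers below are assumptions chosen only to illustrate the trend:

```python
# Average power of a duty-cycled radio, including the guard time forced by
# wakeup-timer inaccuracy. All numbers are illustrative assumptions.
P_ACTIVE_W = 1e-3      # radio power while transmitting/receiving
P_SLEEP_W  = 100e-9    # sleep power (wakeup timer always on)
T_ACTIVE_S = 1e-3      # useful communication time per event
PERIOD_S   = 10.0      # one communication event every 10 s

def avg_power(timer_ppm: float) -> float:
    # Worst-case clock drift over one period must be absorbed as guard time.
    guard_s = PERIOD_S * timer_ppm * 1e-6
    on_s = T_ACTIVE_S + guard_s
    return (P_ACTIVE_W * on_s + P_SLEEP_W * (PERIOD_S - on_s)) / PERIOD_S

for ppm in (20, 1000, 10000):   # e.g., XO-like vs. poorly calibrated on-chip timer
    print(f"{ppm:6d} ppm -> average power ~ {avg_power(ppm)*1e6:.2f} uW")
```

In this toy model the guard time, not the useful communication, quickly dominates the average power as the timer accuracy degrades, which is why the wakeup timer's accuracy matters as much as its own power draw.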
For this reason, a Crystal Oscillator (XO) has been widely chosen as a wakeup timer,
due to its high frequency accuracy and reliability. However, its millimeter-scale
size is too bulky for minimizing a wireless system for ingestible sensors. To replace
the XO, RC-based timers[59] have been introduced, as they can be fully integrated on a chip, as depicted in Fig. 10. Since the short-term timing inaccuracy (jitter) of the timer is averaged out over the long term
in duty-cycling scenarios, such an on-chip timer can provide decent long-term
frequency accuracy. Moreover, RC-based timers can also operate at low power, as their
frequency is mainly determined by the charging-discharging operation, dominated by the RC time constant.
However, RC-based timers are susceptible to PVT variation; in particular, their sensitivity
to temperature (temperature coefficient) is one of the challenging design constraints
to overcome[60]. Nevertheless, since the body temperature remains relatively constant, they are promising
as wakeup timers for ingestible sensors.
Fig. 10. Relaxation oscillator [59].
Frequency stability is another critical constraint for establishing wireless communication,
which must be guaranteed on the same frequency channel. For example, the frequency stabilities
in the BLE standard[27] and the MICS band[33] are defined as ${\pm}$41 and ${\pm}$100 ppm, respectively. To meet such accuracy,
the XO has also been the inevitable option as a frequency reference[63]. To minimize the XO size, a Film Bulk Acoustic Resonator (FBAR) can be utilized to replace
the quartz crystal[66]. The FBAR is sub-millimeter sized and secures high frequency stability (<60 ppm).
However, the FBAR requires additional processing steps and costs for integration.
Alternatively, network-based frequency synchronization methods have been presented[32],[67]. Instead of external devices, e.g., crystals or FBARs, network-based frequency
synchronization directly utilizes the received signal as a frequency reference.
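Translating these tolerances into absolute frequency error is a one-line calculation (carrier frequencies chosen to represent the bands mentioned above):

```python
# Absolute frequency tolerance implied by the ppm specifications at representative carriers.
for f_carrier_hz, ppm, band in ((2.4e9, 41, "BLE, 2.4 GHz"), (403.5e6, 100, "MICS, 402-405 MHz")):
    tol_hz = f_carrier_hz * ppm * 1e-6
    print(f"{band:18s}: +/-{ppm} ppm -> +/-{tol_hz/1e3:.1f} kHz")
# BLE, 2.4 GHz      : +/-41 ppm  -> +/-98.4 kHz
# MICS, 402-405 MHz : +/-100 ppm -> +/-40.4 kHz
```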
In general, the network-based frequency synchronization technique has two challenges.
First, the receiver needs to recognize the received signal properly and extract the carrier
frequency. Second, the frequency calibration is susceptible to carrier frequency
drift, as it can occur only when a signal is received.
As shown in Fig. 11, tracking loop-based receivers (RXs) extract the carrier frequency from the down-converted
Intermediate Frequency (IF) signals and adjust either the frequency or the phase of the Local
Oscillator (LO) accordingly[32],[67],[68]. This approach avoids extra circuitry for frequency or phase detection by
detecting the frequency or phase of the input signal through the mixer. However, it requires
the RX to first define the channel to listen on, and thus to set its LO frequency
accordingly[32]. To reduce the energy needed to initialize the LO frequency, a Phase-Locked Loop (PLL) based
carrier frequency extraction with a dedicated reference clock recovery receiver[69] can be utilized to generate a reference clock from the received signal.
Fig. 11. Tracking loop-based RX.
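A conceptual sketch of such a tracking loop is given below; the first-order loop, the IF target, and all numerical values are illustrative assumptions rather than the implementations of[32],[67],[68]:

```python
# Conceptual frequency-tracking loop: estimate the frequency error at the IF after
# the mixer and steer the LO until the down-converted carrier lands on the target IF.
F_RF_HZ     = 403.5e6      # incoming carrier (unknown to the RX in practice)
F_IF_TARGET = 1.0e6        # desired intermediate frequency
LOOP_GAIN   = 0.5          # first-order loop gain (assumption)

f_lo = 402.3e6             # initial LO setting from the coarse channel definition
for step in range(8):
    f_if = F_RF_HZ - f_lo                  # mixer output frequency
    error = f_if - F_IF_TARGET             # frequency error measured at the IF
    f_lo += LOOP_GAIN * error              # adjust the LO toward the carrier
    print(f"step {step}: LO = {f_lo/1e6:.4f} MHz, IF error = {error/1e3:+.1f} kHz")
# Once converged, the LO tracks the received carrier, which can then serve as the
# frequency reference in place of a crystal.
```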