Engineering and Technology
Digital circuits and neural networks based on acid-base chemistry implemented by robotic fluid handling
A. A. Agiza, K. Oakley, et al.
This research by Ahmed A. Agiza, Kady Oakley, Jacob K. Rosenstein, Brenda M. Rubenstein, Eunsuk Kim, Marc Riedel, and Sherief Reda combines digital circuits and neural networks with acid-base chemistry, enabling new forms of information processing carried out by a robotic fluid handler. It demonstrates how mixtures of acids and bases can encode data and implement circuitry.
Introduction
The study explores acid-base chemistry as a simple, robust medium for molecular computation, motivated by interest in alternatives to semiconductor devices for power efficiency, affordability, biological compatibility, and operation in environments unsuitable for electronics. Molecular computing approaches, including reaction-diffusion systems and DNA computing, have shown promise but often require complex spatio-temporal control, are temperature-sensitive, or demand numerous custom reagents and long computation times. The authors propose using strong acid/base reactions to realize universal computation via the majority function that emerges when mixing acids and bases, paired with a negation implemented through a dual-rail (complementary) encoding. This dual-rail scheme represents values as (Acid, Base) for TRUE and (Base, Acid) for FALSE, enabling inversion by swapping rails and naturally yielding both outputs and their complements. The work aims to build primitive logic blocks, digital circuits, and neural network classifiers using this representation, executed by a robotic acoustic liquid handler.
Literature Review
Prior molecular computation strategies include reaction-diffusion and chemical oscillators such as the Belousov–Zhabotinsky reaction, which can perform classification but rely on complex, sensitive spatio-temporal dynamics. Hybrid methods have used electronic actuators to initiate reactions. Autocatalytic reactions have been modeled as neural network activation functions to achieve image recognition with catalyst concentrations encoding inputs. DNA strand-displacement has been the most studied platform for molecular logic and arithmetic, offering programmability but facing temperature sensitivity, long reaction times, and the need for large numbers of custom reagents and optimizations. Complementary or dual-rail concepts have been employed in other chemical computing approaches (e.g., seesaw gates and complementary solution encodings). The present work extends the complementary information concept to ubiquitous acid/base chemistry, leveraging majority-inverter completeness to realize universal logic with simple mixtures.
Methodology
Chemical basis and majority operation: The system mixes strong acids (HX) and bases (YOH) in water with reactions: (1) HX + H2O → H3O+ + X−, (2) YOH → Y+ + OH−, (3) H3O+ + OH− → 2H2O. Neutralization couples acid and base to leave the more abundant species, naturally implementing a majority function over mixed droplets of equal volume and concentration. Inversion is achieved by swapping the complementary rails in a dual-rail encoding rather than a direct chemical inversion.
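The neutralization-as-majority idea can be sketched in a few lines. This is a minimal illustration (not the authors' code), assuming all droplets have equal volume and concentration so that the surviving species is simply the one in excess:

```python
# Minimal sketch: mixing equal-volume, equal-concentration droplets implements
# a majority vote, because H3O+ + OH- -> 2H2O neutralizes acid/base pairs and
# only the excess species survives in the pooled well.

def mix(droplets):
    """Each droplet is +1 (acid) or -1 (base).
    Returns 'acid', 'base', or 'neutral' for the pooled well."""
    net = sum(droplets)  # excess of H3O+ (positive) over OH- (negative)
    if net > 0:
        return "acid"
    if net < 0:
        return "base"
    return "neutral"

def majority3(a, b, c):
    """Three-input majority over {+1, -1}, realized by mixing three droplets."""
    return mix([a, b, c])
```

For example, `majority3(+1, +1, -1)` yields `"acid"`: two acid droplets outvote one base droplet.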
Dual-rail encoding and discretization: Each bit is encoded as a pair of complementary solutions: TRUE (+1) as (Acid, Base) and FALSE (−1) as (Base, Acid). A neutral pH 7 pair denotes the midpoint (0) when applicable. Multi-level encoding uses controlled dilution to represent discretized values; for a 3-bit scheme, values are in {−4, −3, −2, −1, 1, 2, 3, 4}, with positive values as (Acid, Base) and negative as (Base, Acid). To prepare a given encoded value, water is added according to: added water volume = (initial volume × maximum encoded value) / |encoded value| − initial volume. An acoustic liquid handler (Echo 550, Beckman Coulter) dispenses acids, bases, and water to destination well plates, routing liquids much as wires route signals in electronic circuits. For each value, the handler dispenses an acid then a base (for positive values) or a base then an acid (for negative values), and adds water to achieve the required dilution per the discretization.
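The dilution rule follows from conservation of solute: diluting a stock that encodes the maximum level down to level |v| requires a final volume of (initial volume × max level / |v|), so the water to add is the difference. A short sketch, assuming the 3-bit scheme with levels 1 through 4 on each rail:

```python
def added_water_volume(v_init, value, v_max=4):
    """Water (same units as v_init) to add so a stock encoding the maximum
    level v_max is diluted to encode |value|.
    From conservation of moles: c_final / c_stock = |value| / v_max,
    hence final volume = v_init * v_max / |value|."""
    level = abs(value)
    return v_init * v_max / level - v_init
```

For instance, starting from 2.4 µL of full-strength stock, encoding level 2 requires adding 2.4 µL of water (a 1:1 dilution), while level 4 requires none.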
Digital logic modeling: Majority and inversion (via rail swapping) form a functionally complete set. Primitive AND, OR, INV, NAND, and NOR gates are constructed using acid-base blocks with dual-rail encoding. To avoid neutral outputs, gates are designed with an odd number of inputs; two-input gates use a constant bias input (acid or base) as the third input. A digital 2-bit decoder is mapped to an equivalent acid-base circuit and executed with the liquid handler, using acid/base stocks for inputs, their complements, and biases. Cascade limitations are addressed conceptually by refreshing intermediate results with fresh acid/base stocks between logic stages.
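The gate constructions above reduce to a small algebra over {+1, −1}. A minimal sketch (illustrative, not the authors' implementation): inversion is a rail swap, and each two-input gate is a three-input majority with a constant acid or base droplet as bias:

```python
# Dual-rail convention: +1 -> (acid, base), -1 -> (base, acid).

def maj3(a, b, c):
    """Three-input majority, realized chemically by mixing three droplets."""
    return 1 if (a + b + c) > 0 else -1

def INV(x):
    """Inversion = swapping the complementary rails (no reaction needed)."""
    return -x

def AND(a, b):
    """Bias the majority with a constant base droplet (-1)."""
    return maj3(a, b, -1)

def OR(a, b):
    """Bias the majority with a constant acid droplet (+1)."""
    return maj3(a, b, +1)

def NAND(a, b):
    return INV(AND(a, b))

def NOR(a, b):
    return INV(OR(a, b))
```

The odd input count guarantees the pooled well is never exactly neutral, which is why two-input gates take a bias droplet as their third input.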
Neural network classifier implementation: Images (e.g., MNIST digits) are flattened and encoded per pixel into dual-rail acid/base wells: white pixels as (Acid, Base), black pixels as (Base, Acid). A binary-weight neural network (weights in {+1, −1}) is realized by either direct transfer (multiply by +1) or rail swap (multiply by −1). For each neuron (class), the liquid handler mixes the weighted contributions, producing acidic/basic outputs corresponding to positive/negative neuron activations. A pH indicator provides a nonlinear threshold-like activation readout: (Acid, Base) indicates positive; (Base, Acid) indicates negative.
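Because weights are restricted to ±1 and multiplication by −1 is a rail swap, a neuron's weighted sum reduces to counting excess acid over base droplets in the pooled output well. A minimal sketch under the assumption of equal transfer volumes per pixel:

```python
def neuron(pixels, weights):
    """pixels, weights in {+1, -1}. Multiplying by -1 is a rail swap, so the
    weighted sum equals the excess of acid over base droplets after pooling
    equal volumes from every pixel's wells."""
    net = sum(p * w for p, w in zip(pixels, weights))
    # The pH indicator acts as a threshold-like activation on the pooled well:
    # acidic -> +1 (Acid, Base) / yellow-blue; basic -> -1 (Base, Acid).
    return 1 if net > 0 else -1
```

In inference, one such neuron is pooled per class, and the class whose output rail reads acidic (positive activation) is the prediction.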
Materials and volumes: HCl (36.5–38.0%) and NaOH (≥97%) were prepared at 100 mM in deionized water. Bromothymol blue (0.04%) served as the pH indicator. Stock 384‑well polypropylene plates (min 10 µL, max 60 µL) were manually prepared with rows of acid, base, and water (50 µL per well). The acoustic liquid handler (max 10% volume deviation) encodes images into 1536‑well low-dead-volume plates (min 1 µL, max 4 µL). For each pixel-weight condition: 2.4 µL of base followed by 2.4 µL of acid (or vice versa) are transferred depending on the sign alignment of pixel and weight. For multi-level values, water is dispensed per the discretization rule. Neuron outputs are pooled by transferring 200 nL from each pixel’s pH rail wells into an output well for the neuron’s pH rail, and similarly for the complementary rail.
Data preparation and training: MNIST digits 0, 1, and 2 were used (train/validation counts: 5923/980 for 0; 6742/1135 for 1; 5958/1032 for 2). Images were evaluated at scales 8×8, 12×12, 16×16, and 28×28. Two variants: (i) binary pixels mapped to ±1 using a threshold of 128, encoded as (Base, Acid) for <128 and (Acid, Base) for ≥128; (ii) 3-bit discretization to values {−4, −3, −2, −1, 1, 2, 3, 4} over specified intensity ranges. Networks used weight binarization to constrain weights to ±1, trained with learning rate 0.001 for 30 epochs; Binary Cross Entropy loss was used, and a sigmoid activation was used during training for the single classification layer. Models for inference used softmax for multi-class outputs. Trained weights were exported to guide the liquid-handling protocol.
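The two pixel-encoding variants can be sketched as follows. The binary threshold of 128 is stated above; the exact intensity ranges of the paper's 3-bit scheme are not given here, so a uniform 32-wide split per level is assumed purely for illustration:

```python
def binarize(pixel):
    """Binary variant: intensity >= 128 -> +1 (Acid, Base), else -1 (Base, Acid)."""
    return 1 if pixel >= 128 else -1

def discretize_3bit(pixel):
    """3-bit variant: map 0..255 onto {-4, -3, -2, -1, 1, 2, 3, 4}.
    Assumption: uniform 32-wide intensity bands per level (the paper's exact
    ranges are not reproduced in this summary)."""
    level = min(pixel // 32, 7)              # band index 0..7
    return level - 3 if level >= 4 else level - 4
```

Each discretized value then determines both the rail order (sign) and the dilution (magnitude) dispensed by the liquid handler.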
pH simulation and validation: An automated pH simulator computes [H+] and resultant pH from liquid-handling transfer programs (directly ingesting the handler’s programming files) to predict outcomes and compare against digital model classifications. Experimental runs used a subset of validation images across sizes and class counts. Run-time for encoding and pooling scales with image size and number of classes (e.g., for 16×16 2-class binary: 52 min encoding, 5 min pooling; for 28×28 2-class binary: 158 min encoding, 16 min pooling). Readout used 5 µL of Bromothymol blue in neuron output wells: yellow indicates acid, blue indicates base; dual-rail colors (Acid/Yellow, Base/Blue) represent +1; (Base/Blue, Acid/Yellow) represent −1.
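The core of such a pH simulator is straightforward for strong, fully dissociated acids and bases. A minimal sketch (an assumption-laden stand-in for the authors' tool, which parses the handler's programming files directly), neglecting water autoionization away from neutrality:

```python
import math

def simulate_ph(transfers):
    """transfers: list of (species, volume_uL, conc_M) with species in
    {'acid', 'base', 'water'}. Returns the pH of the pooled well for strong,
    fully dissociated acid/base; note uL * M = umol, and umol/uL = mol/L."""
    moles = {"acid": 0.0, "base": 0.0}
    total_vol = 0.0
    for species, vol, conc in transfers:
        total_vol += vol
        if species in moles:
            moles[species] += vol * conc      # micromoles of H3O+ or OH-
    net = moles["acid"] - moles["base"]       # excess H3O+ (umol)
    if net == 0:
        return 7.0                            # fully neutralized
    excess_conc = abs(net) / total_vol        # mol/L
    ph = -math.log10(excess_conc)
    return ph if net > 0 else 14.0 - ph
```

Running such a model over every planned transfer lets the predicted acidic/basic rail signs be compared against the digital classifier's labels before any liquid is dispensed.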
Key Findings
- The dual-rail acid-base system implements universal computation using majority and inversion (via rail swapping), enabling construction of logic primitives (AND, OR, INV, NAND, NOR) and a 2-bit decoder circuit executed by an acoustic liquid handler.
- Neural network inference was realized chemically using binary weights mapped to direct transfer (+1) or rail swap (−1). The pH indicator functions as a nonlinear activation readout.
- Experimental validation on selected MNIST images produced outputs matching the in-silico classifier, with some experiments showing perfect agreement in predicted labels; when color-based sign ambiguity occurred, measured pH magnitudes still matched in-silico predicted labels.
- Simulation-to-digital model matching for full validation sets was high: for 2-class binary image classifiers, average match 99.61% (digital model accuracy ≈95.72%); for 2-class 3-bit classifiers, average match 98.94% (digital model accuracy ≈95.27%); for 3-class binary classifiers, average match 98.35% (digital model accuracy ≈95.90%).
- Practical execution times (Echo 550) scale with image size and class count; e.g., 16×16 2-class binary: 52 min encoding + 5 min pooling; 28×28 2-class binary: 158 min encoding + 16 min pooling; 16×16 3-class binary: 78 min encoding + 8 min pooling.
Discussion
The findings demonstrate that ubiquitous strong acid/base chemistry can implement functionally complete logic using majority behavior and negation via dual-rail encoding, enabling both digital logic circuits and neural network inference without complex spatio-temporal reaction control. The approach is rapid (fast acid-base kinetics), inexpensive, and compatible with simple lab instrumentation. It offers a potential interface to molecular data storage systems that are pH-responsive. The dual-rail method elegantly resolves the challenge of chemical negation and yields complementary outputs naturally. For logic circuits, odd-input majority with bias avoids neutral outputs; however, cascaded stages face dilution toward neutrality. This can be mitigated by refreshing intermediate states or developing automated pH-level restoration (e.g., pH-sensitive hydrogel valves). For neural networks, the chemical implementation closely tracks digital inference, with high simulation agreement; minor discrepancies stem from color-based threshold readout rather than the underlying chemistry. The concept generalizes beyond acids/bases: complementary rails could encode other relationships (e.g., scalar multiples or nonlinear transforms), and incorporating buffers or weaker acids/bases could create nonlinear activation-like behaviors to support deeper networks. Logic optimization could further reduce stages and mitigate dilution, and related transforms (e.g., Hadamard) align well with the operations demonstrated.
Conclusion
This work introduces a practical chemical computing framework using dual-rail acid-base encoding to realize universal logic (via majority and inversion) and to implement neural network inference with binary weights. The authors built primitive gates and a decoder circuit and demonstrated MNIST digit classification, showing near-perfect agreement between chemical simulations and digital models and experimental matches on selected samples. The methodology requires only common reagents, an acoustic liquid handler, and a pH indicator, enabling rapid, low-cost computation and potential integration with pH-responsive molecular storage. Future directions include: (i) introducing weaker acids/bases and buffer systems to realize nonlinear, cascadable activation functions for multi-layer networks; (ii) developing automated pH restoration mechanisms (e.g., pH-sensitive hydrogels) for deep logic cascades; (iii) applying logic optimization to minimize stages and dilution; and (iv) extending complementary-rail representations to other computational chemistries and binary transforms such as the Hadamard transform.
Limitations
- Signal degradation/dilution across cascaded logic stages due to neutralization drives outputs toward neutrality, necessitating refresh with fresh acid/base stocks or future restoration mechanisms for long cascades.
- Readout relies on colorimetric pH indicators with threshold behavior, which can introduce ambiguities when both outputs share the same sign by color; accurate outcomes may require direct pH measurement for tie-breaking.
- Inversion is not a single-rail chemical operation; it is achieved via dual-rail swapping, which requires maintaining complementary rails and careful liquid handling.
- Current demonstrations focus on inference with pre-trained binary-weight networks; training is performed digitally and then mapped to chemistry.
- Execution times scale with image size and class count and are bounded by liquid-handling throughput and volume precision tolerances (up to 10% deviation).