<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">NEJSDS</journal-id>
<journal-title-group><journal-title>The New England Journal of Statistics in Data Science</journal-title></journal-title-group>
<issn pub-type="ppub">2693-7166</issn><issn-l>2693-7166</issn-l>
<publisher>
<publisher-name>New England Statistical Society</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">NEJSDS71</article-id>
<article-id pub-id-type="doi">10.51387/24-NEJSDS71</article-id>
<article-categories><subj-group subj-group-type="area">
<subject>Spatial and Environmental Statistics</subject></subj-group><subj-group subj-group-type="heading">
<subject>Methodology Article</subject></subj-group></article-categories>
<title-group>
<article-title>Irrigation Zone Delineation by Coupling Neural Networks with Spatial Statistics</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-4654-9827</contrib-id>
<name><surname>Heaton</surname><given-names>Matthew J.</given-names></name><email xlink:href="mailto:mheaton@stat.byu.edu">mheaton@stat.byu.edu</email><xref ref-type="aff" rid="j_nejsds71_aff_001"/><xref ref-type="corresp" rid="cor1">∗</xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Teuscher</surname><given-names>David</given-names></name><email xlink:href="mailto:dteuscher.37.12@gmail.com">dteuscher.37.12@gmail.com</email><xref ref-type="aff" rid="j_nejsds71_aff_002"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Hansen</surname><given-names>Neil C.</given-names></name><email xlink:href="mailto:neil_hansen@byu.edu">neil_hansen@byu.edu</email><xref ref-type="aff" rid="j_nejsds71_aff_003"/>
</contrib>
<aff id="j_nejsds71_aff_001">Department of Statistics, <institution>Brigham Young University</institution>, 2152 WVB, Provo, UT 84602, <country>USA</country>. E-mail address: <email xlink:href="mailto:mheaton@stat.byu.edu">mheaton@stat.byu.edu</email></aff>
<aff id="j_nejsds71_aff_002">Department of Statistics, <institution>Brigham Young University</institution>, 2152 WVB, Provo, UT 84602, <country>USA</country>. E-mail address: <email xlink:href="mailto:dteuscher.37.12@gmail.com">dteuscher.37.12@gmail.com</email></aff>
<aff id="j_nejsds71_aff_003">Department of Plant &amp; Wildlife Sciences, <institution>Brigham Young University</institution>, 5108 LSB, Provo, UT 84602, <country>USA</country>. E-mail address: <email xlink:href="mailto:neil_hansen@byu.edu">neil_hansen@byu.edu</email></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>∗</label>Corresponding author.</corresp>
</author-notes>
<pub-date pub-type="ppub"><year>2025</year></pub-date><pub-date pub-type="epub"><day>31</day><month>10</month><year>2024</year></pub-date><volume>3</volume><issue>1</issue><fpage>82</fpage><lpage>93</lpage><history><date date-type="accepted"><day>21</day><month>8</month><year>2024</year></date></history>
<permissions><copyright-statement>© 2025 New England Statistical Society</copyright-statement><copyright-year>2025</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>Open access article under the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">CC BY</ext-link> license.</license-p></license></permissions>
<abstract>
<p>Variable rate irrigation (VRI) seeks to increase the efficiency of irrigation by spatially adjusting water output within an agricultural field. Central to the success of VRI technology is establishing homogeneous irrigation zones. In this research, we propose a fusion of statistical modeling and deep learning, using artificial neural networks to map irrigation zones from simple-to-measure predictors. We further couple our neural network model with spatial correlation to capture smooth variation in the irrigation zones. We demonstrate the effectiveness of our model in defining irrigation zones for a winter wheat field in Rexburg, Idaho.</p>
</abstract>
<kwd-group>
<label>Keywords and phrases</label>
<kwd>Ordered multinomial</kwd>
<kwd>Precision agriculture</kwd>
<kwd>Bayesian</kwd>
</kwd-group>
<funding-group><funding-statement>This work was partially funded by the US-Israel Binational Agricultural Research and Development Fund grant IS-5218-19.</funding-statement></funding-group>
</article-meta>
</front>
<body>
<sec id="j_nejsds71_s_001">
<label>1</label>
<title>Introduction</title>
<sec id="j_nejsds71_s_002">
<label>1.1</label>
<title>Problem Background and Data</title>
<p>The management of agricultural fields (i.e. farming) in the era of data science has evolved to use spatial mapping, remote sensing, soil and terrain measurements, weather measurements and other data sources to improve the quantity and quality of crops [<xref ref-type="bibr" rid="j_nejsds71_ref_021">21</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_009">9</xref>]. The management of agricultural fields using advanced data analytics is referred to, collectively, as “precision agriculture” [<xref ref-type="bibr" rid="j_nejsds71_ref_005">5</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_008">8</xref>] and is quickly becoming the industry standard. Broadly, precision agriculture attempts to use the spatial variability within a field to manage individual crop locations rather than treat a field as spatially and temporally homogeneous. The spatial variability within a field can be influenced by differences in elevation, aspect, or other topographical features of the field that influence both irrigation and crop yield. Given that agricultural fields often span multiple acres, precision agriculture has been shown to outperform basic farming techniques by increasing crop yield [<xref ref-type="bibr" rid="j_nejsds71_ref_030">30</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_032">32</xref>].</p>
<p>Crop yield is heavily driven by the volumetric water content (VWC; the ratio of the volume of water currently in the soil to the unit volume of soil, or in other words the quantity of water contained in the soil) and, in arid regions, irrigation is a practiced method for controlling and adjusting the VWC. As such, variable rate irrigation (VRI) is a practice within precision agriculture focused on using data to adjust the amount of water applied throughout the field according to spatial and temporal variations [<xref ref-type="bibr" rid="j_nejsds71_ref_020">20</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_039">39</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_040">40</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_025">25</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_060">60</xref>]. Such management zones can be temporally dynamic to help inform day-to-day watering decisions [<xref ref-type="bibr" rid="j_nejsds71_ref_013">13</xref>] or, as is the case in this application, static to inform season-long watering decisions.</p>
<p>One approach to VRI is to partition the field into management zones (or irrigation zones) wherein irrigation rates are homogeneous within each management zone but different across zones rather than utilizing a constant irrigation rate throughout the entire field [<xref ref-type="bibr" rid="j_nejsds71_ref_018">18</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_061">61</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_057">57</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_038">38</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_037">37</xref>]. Benefits of using VRI with management zones include reductions in water and energy input, increased crop productivity, decreased runoff (i.e. less waste) and reductions in chemical inputs and soil pollution [<xref ref-type="bibr" rid="j_nejsds71_ref_042">42</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_003">3</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_025">25</xref>].</p>
<p>Effective irrigation zones should partition the agricultural field based on VWC [<xref ref-type="bibr" rid="j_nejsds71_ref_031">31</xref>]. However, obtaining VWC is a labor-intensive process, so VWC is often only sparsely measured across a field [<xref ref-type="bibr" rid="j_nejsds71_ref_027">27</xref>]. As an example, consider Figure <xref rid="j_nejsds71_fig_001">1</xref>, which shows 66 VWC measurements averaged over four time periods between April and September 2019 for an agricultural field of winter wheat in Rexburg, Idaho, along with the associated irrigation zone for each location. Although the VWC was recorded at 66 different locations, the available data do not cover the field densely enough to precisely define irrigation decisions and zones. That is, based on such sparse measurements, the boundaries between zones are unclear.</p>
<fig id="j_nejsds71_fig_001">
<label>Figure 1</label>
<caption>
<p>The scatterplot on the left shows the average VWC at the 66 samples throughout the field in Rexburg, Idaho. The edges of the field were excluded from this study. The scatterplot on the right shows the classified zone at the 66 samples throughout the field in Rexburg, Idaho. The average VWC was calculated from 4 VWC measurements taken between April and August of the same year.</p>
</caption>
<graphic xlink:href="nejsds71_g001.jpg"/>
</fig>
<p>Rather than using the VWC directly, agricultural scientists have begun to use alternative, more easily obtained data to inform irrigation zones [<xref ref-type="bibr" rid="j_nejsds71_ref_050">50</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_056">56</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_026">26</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_024">24</xref>]. Examples of such data for the Rexburg field considered in this research are shown in Figure <xref rid="j_nejsds71_fig_002">2</xref> and include historical yields, normalized difference vegetation index (NDVI), elevation, topographical wetness index (TWI), aspect, and slope (more details on these covariates are given in the application Section <xref rid="j_nejsds71_s_008">3.1</xref> below). These alternative covariates to VWC were recorded for over 5000 unique locations in the field using simple remote sensors mounted on drones or tractors, making these data readily available to farmers.</p>
<fig id="j_nejsds71_fig_002">
<label>Figure 2</label>
<caption>
<p>Heat maps for the alternative data to VWC across the field in Rexburg using remote sensing.</p>
</caption>
<graphic xlink:href="nejsds71_g002.jpg"/>
</fig>
<p>The primary issue with using covariates other than VWC, such as those in Figure <xref rid="j_nejsds71_fig_002">2</xref>, to define VRI zones is that such covariates are non-linearly related to VWC and, hence, may result in less water-efficient zones than ones defined directly by VWC. Figure <xref rid="j_nejsds71_fig_003">3</xref> shows scatterplots of a few of the variables against the VWC for the Rexburg field, demonstrating clear non-linear relationships. In fact, the scatterplots in Figure <xref rid="j_nejsds71_fig_003">3</xref> show a highly complex relationship between these covariates and VWC, suggesting that zones defined directly on the covariates may differ considerably from the more water-efficient zones defined on VWC.</p>
<fig id="j_nejsds71_fig_003">
<label>Figure 3</label>
<caption>
<p>Scatterplots of the water content against elevation, TWI, slope, and yield throughout the field. It is evident, especially for elevation and slope, that the relationship between these covariates and the water content is non-linear, justifying the use of neural networks to model this relationship.</p>
</caption>
<graphic xlink:href="nejsds71_g003.jpg"/>
</fig>
</sec>
<sec id="j_nejsds71_s_003">
<label>1.2</label>
<title>Research Goals and Contributions</title>
<p>In this research, we seek to implement a method to delineate irrigation zones based on VWC using the data displayed in Figure <xref rid="j_nejsds71_fig_002">2</xref> as covariate information. Specifically, we define a model relating the easily obtained and spatially dense covariate data to the sparse VWC data. To do so, we integrate deep learning into a statistical modeling framework for irrigation zones to capture the complex relationship between the covariate data and the response variable.</p>
<p>Deep learning is a subfield of machine learning focused on using highly flexible algorithms called neural networks to learn complex relationships between covariates and response variables [<xref ref-type="bibr" rid="j_nejsds71_ref_053">53</xref>]. Deep learning algorithms have been successfully implemented across a wide range of applications, leading to their explosion in popularity in recent years [<xref ref-type="bibr" rid="j_nejsds71_ref_001">1</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_041">41</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_049">49</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_012">12</xref>]. More recently, deep learning (and machine learning more generally) has successfully incorporated spatial correlation into its algorithms [<xref ref-type="bibr" rid="j_nejsds71_ref_048">48</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_045">45</xref>].</p>
<p>From a statistical perspective, deep learning algorithms do not inherently account for uncertainty in the predictions and are often so complex that they are uninterpretable (so-called “black box” methods). Yet, recent efforts in the statistics community have started to fold deep learning into statistical models [<xref ref-type="bibr" rid="j_nejsds71_ref_034">34</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_054">54</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_046">46</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_011">11</xref>]. While we acknowledge that some methods (such as dropout) can give uncertainty estimates [see <xref ref-type="bibr" rid="j_nejsds71_ref_015">15</xref>], we consider embedding a deep neural network into a statistical model and follow [<xref ref-type="bibr" rid="j_nejsds71_ref_052">52</xref>] by estimating associated parameters using a fully Bayesian paradigm. Further, we implement Bayesian versions of partial dependence plots to give some interpretability to the deep neural network between the covariates and the response variable.</p>
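The partial dependence plots mentioned above can be computed for any fitted mean function; in the Bayesian setting the curve would simply be recomputed for each posterior draw of the network weights. A minimal (non-Bayesian) sketch, where the fitted function <code>f_hat</code> is a hypothetical stand-in for the paper's neural network mean:

```python
import numpy as np

# Hypothetical stand-in for a fitted mean function f_L(X(s)); any callable
# mapping an (n, p) covariate matrix to n predictions works (assumption).
def f_hat(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def partial_dependence(f, X, j, grid):
    """Average prediction as covariate j sweeps over `grid`,
    holding all other covariates at their observed values."""
    pd = np.empty(len(grid))
    for k, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, j] = v          # fix covariate j at the grid value v
        pd[k] = f(Xv).mean()  # average over the empirical covariate distribution
    return pd

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
grid = np.linspace(-2, 2, 9)
curve = partial_dependence(f_hat, X, 0, grid)  # one point per grid value
```

For a Bayesian version, averaging the resulting curves over posterior samples yields a pointwise posterior distribution for the partial dependence function.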
<p>With a deep neural network accounting for the complex relationships between the covariates and response, our model also creates smooth irrigation zones by incorporating spatial correlation. Specifically, we use spatial basis functions distinct from the neural network portion of the model, thereby merging deep learning into a spatial modeling framework.</p>
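A spatial basis-function expansion of the kind described above can be sketched as follows; the paper does not specify its particular basis, so the Gaussian radial basis functions, knot grid, and bandwidth here are purely illustrative:

```python
import numpy as np

# Low-rank spatial effect w(s) = Phi(s) @ alpha built from Gaussian radial
# basis functions centered at knot locations (basis choice is illustrative).
def rbf_basis(coords, knots, bandwidth):
    """Evaluate Gaussian radial basis functions at each location."""
    d2 = ((coords[:, None, :] - knots[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

rng = np.random.default_rng(1)
coords = rng.uniform(0, 1, size=(500, 2))           # field locations s
kx = np.linspace(0.1, 0.9, 5)
knots = np.array([(x, y) for x in kx for y in kx])  # 25 knots on a grid
Phi = rbf_basis(coords, knots, bandwidth=0.2)       # 500 x 25 basis matrix
alpha = rng.normal(size=knots.shape[0])             # basis weights (random here)
w = Phi @ alpha                                     # smooth spatial surface
```

In a fitted model, the weights <code>alpha</code> would be estimated (here, sampled in the Bayesian scheme) rather than drawn at random, and the smoothness of the resulting surface is controlled by the number of knots and the bandwidth.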
<p>The primary quantity of interest in this research is the irrigation zone of each location on the field – not necessarily the VWC. Admittedly, we could predict the VWC first and then create irrigation zones by splitting on predicted VWC quantiles. However, here we model the associated zone, given by the right panel of Figure <xref rid="j_nejsds71_fig_001">1</xref>, as the response of interest. We focus on the zone rather than the VWC for two reasons. First, our data are unique in that we have exact VWC measurements; given the cost, obtaining exact VWC data is exceptionally rare in agricultural practice (our data being the exception rather than the rule). A model predicting continuous VWC would be applicable only to this field in Rexburg and could not be tested on other fields. Rather, most agricultural fields can only collect coarse VWC levels of “low” or “high.” By focusing on the irrigation zone, our statistical model is more aligned with the data typically available in other crop fields. Second, VRI is typically implemented by applying “less than average” water to some locations and “more than average” to others. Hence, implementing VRI requires knowing which locations are “low” and which are “high” rather than the exact VWC.</p>
<p>Given the focus on irrigation zones, the response variable is thus an ordered multinomial response. Under this discretization, we build our model following the latent variable approach of [<xref ref-type="bibr" rid="j_nejsds71_ref_002">2</xref>], [<xref ref-type="bibr" rid="j_nejsds71_ref_022">22</xref>] and [<xref ref-type="bibr" rid="j_nejsds71_ref_004">4</xref>] to facilitate Bayesian computation. While others have developed statistical models for neural networks [see <xref ref-type="bibr" rid="j_nejsds71_ref_016">16</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_034">34</xref>], to our knowledge, this is the first attempt at building a deep spatial statistical model for an ordered categorical response.</p>
<p>We view this work as a contribution to the growing body of literature on spatial deep learning methods [see <xref ref-type="bibr" rid="j_nejsds71_ref_058">58</xref>]. Alternative approaches include that of [<xref ref-type="bibr" rid="j_nejsds71_ref_007">7</xref>], who use spatial bases as inputs into neural networks. However, here we use spatial bases separately from the neural network because using such bases as inputs may sacrifice some of the spatial smoothness that is crucial to our application. Likewise, [<xref ref-type="bibr" rid="j_nejsds71_ref_059">59</xref>] use generalized squared error loss functions with a spatial covariance to fit spatial graphical neural networks. Our approach is inherently different because our response is non-Gaussian, so squared error loss functions are not directly applicable.</p>
<p>The remainder of this paper is outlined as follows. Section <xref rid="j_nejsds71_s_004">2</xref> describes our spatial neural network model along with our Bayesian strategy for estimating model parameters. Section <xref rid="j_nejsds71_s_007">3</xref> describes the results of fitting the model to the Rexburg field, including model tuning, predicted zone delineation, and effect interpretation. Finally, Section <xref rid="j_nejsds71_s_012">4</xref> concludes by pinpointing strengths and weaknesses of our approach along with areas for future research.</p>
</sec>
</sec>
<sec id="j_nejsds71_s_004">
<label>2</label>
<title>A Spatial Neural Network Model</title>
<sec id="j_nejsds71_s_005">
<label>2.1</label>
<title>Model Specification</title>
<p>Let <inline-formula id="j_nejsds71_ineq_001"><alternatives><mml:math>
<mml:mi mathvariant="italic">Y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$Y(\boldsymbol{s})\in \{1,\dots ,R\}$]]></tex-math></alternatives></inline-formula> denote the irrigation zone for location <inline-formula id="j_nejsds71_ineq_002"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="double-struck">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[$\boldsymbol{s}={({s_{1}},{s_{2}})^{\prime }}\in {\mathbb{R}^{2}}$]]></tex-math></alternatives></inline-formula> where <italic>R</italic> is the number of desired irrigation zones for a field. Given the current state of variable rate irrigation systems, usually <italic>R</italic> will only be as high as 4 or 5. Further, because each irrigation zone is given a different amount of water (i.e. dry irrigation zones receive more water), we can treat <inline-formula id="j_nejsds71_ineq_003"><alternatives><mml:math>
<mml:mi mathvariant="italic">Y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Y(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> as an <italic>ordered</italic>, discrete spatial random variable. Due to the difficulty of performing spatial analysis on a discrete scale, we follow [<xref ref-type="bibr" rid="j_nejsds71_ref_002">2</xref>], [<xref ref-type="bibr" rid="j_nejsds71_ref_022">22</xref>] and [<xref ref-type="bibr" rid="j_nejsds71_ref_004">4</xref>] by augmenting <inline-formula id="j_nejsds71_ineq_004"><alternatives><mml:math>
<mml:mi mathvariant="italic">Y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Y(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> with a latent variable <inline-formula id="j_nejsds71_ineq_005"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mi mathvariant="double-struck">R</mml:mi></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})\in \mathbb{R}$]]></tex-math></alternatives></inline-formula> such that 
<disp-formula id="j_nejsds71_eq_001">
<label>(2.1)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mi mathvariant="italic">Y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo stretchy="false">∣</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mn>𝟙</mml:mn>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">≤</mml:mo>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ \big(Y(\boldsymbol{s})\mid \{{c_{r}}\},Z(\boldsymbol{s})\big)={\sum \limits_{r=1}^{R}}r𝟙\big({c_{r-1}}\le Z(\boldsymbol{s})\lt {c_{r}}\big)\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_nejsds71_ineq_006"><alternatives><mml:math>
<mml:mn>𝟙</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$𝟙(\cdot )$]]></tex-math></alternatives></inline-formula> is an indicator function and <inline-formula id="j_nejsds71_ineq_007"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mo>−</mml:mo>
<mml:mi>∞</mml:mi>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:mo stretchy="false">⋯</mml:mo>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>∞</mml:mi></mml:math><tex-math><![CDATA[${c_{0}}=-\infty \lt {c_{1}}=0\lt {c_{2}}\lt \cdots \lt {c_{R}}=\infty $]]></tex-math></alternatives></inline-formula> are cut points used to determine the probability that location <inline-formula id="j_nejsds71_ineq_008"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">s</mml:mi></mml:math><tex-math><![CDATA[$\boldsymbol{s}$]]></tex-math></alternatives></inline-formula> belongs to a specific zone.</p>
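The discretization in Equation (2.1) can be sketched directly: a latent value <code>Z(s)</code> is mapped to the ordered zone <code>Y(s)</code> by locating it among the cut points. The cut point values below are illustrative, not estimates from the paper:

```python
import numpy as np

# Latent-variable mapping of Eq. (2.1): Y(s) = r iff c_{r-1} <= Z(s) < c_r,
# with c_0 = -inf < c_1 = 0 < ... < c_R = inf. Only the interior cut points
# (c_1, ..., c_{R-1}) need to be stored; values here are illustrative.
def zone_from_latent(z, cuts):
    """Return Y(s) in {1, ..., R} given latent Z(s) and interior cut points."""
    # searchsorted with side="right" counts cut points at or below z,
    # which is exactly the zone index minus one.
    return np.searchsorted(cuts, z, side="right") + 1

cuts = np.array([0.0, 0.8, 1.5])    # c_1, c_2, c_3 for R = 4 zones
z = np.array([-1.2, 0.3, 1.0, 2.4])
y = zone_from_latent(z, cuts)       # -> zones 1, 2, 3, 4
```

Because the mapping is deterministic given the cut points, placing a model on the latent <code>Z(s)</code> (as in Equation (2.2)) induces the distribution over zones.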
<p>Under Equation (<xref rid="j_nejsds71_eq_001">2.1</xref>) and given the cut points <inline-formula id="j_nejsds71_ineq_009"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${c_{0}},\dots ,{c_{R}}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_nejsds71_ineq_010"><alternatives><mml:math>
<mml:mi mathvariant="italic">Y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Y(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> is completely determined by <inline-formula id="j_nejsds71_ineq_011"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> so that modeling <inline-formula id="j_nejsds71_ineq_012"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> will induce a statistical model for <inline-formula id="j_nejsds71_ineq_013"><alternatives><mml:math>
<mml:mi mathvariant="italic">Y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Y(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula>. Using this relationship between <inline-formula id="j_nejsds71_ineq_014"><alternatives><mml:math>
<mml:mi mathvariant="italic">Y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Y(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_015"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula>, we assume 
<disp-formula id="j_nejsds71_eq_002">
<label>(2.2)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo stretchy="false">∣</mml:mo>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
</mml:mtd>
<mml:mtd class="align-even">
<mml:mo stretchy="false">∼</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}\big(Z(\boldsymbol{s})\mid w(\boldsymbol{s})\big)& \sim \mathcal{N}\big({f_{L}}\big(\boldsymbol{X}(\boldsymbol{s})\big)+w(\boldsymbol{s}),1\big)\end{aligned}\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_nejsds71_ineq_016"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${f_{L}}(\boldsymbol{X}(\boldsymbol{s}))$]]></tex-math></alternatives></inline-formula> is the univariate output function of an <italic>L</italic>-layer feed-forward neural network (FFNN), also referred to as a multilayer perceptron (MLP), with input covariates <inline-formula id="j_nejsds71_ineq_017"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[$\boldsymbol{X}(\boldsymbol{s})={({X_{1}}(\boldsymbol{s}),\dots ,{X_{P}}(\boldsymbol{s}))^{\prime }}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_018"><alternatives><mml:math>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$w(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> is a spatial random effect. Under this model structure, we follow [<xref ref-type="bibr" rid="j_nejsds71_ref_002">2</xref>] by fixing the variance at 1 to identify the scale of the weights (coefficients) of the neural network, and we set <inline-formula id="j_nejsds71_ineq_019"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn></mml:math><tex-math><![CDATA[${c_{1}}=0$]]></tex-math></alternatives></inline-formula> to identify a bias term. This model, importantly, accounts for the various challenges of determining irrigation zones via statistical modeling discussed in Section <xref rid="j_nejsds71_s_003">1.2</xref>. First, the FFNN in (<xref rid="j_nejsds71_eq_002">2.2</xref>) utilizes a nonlinear relationship between the set of covariates <inline-formula id="j_nejsds71_ineq_020"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$\boldsymbol{X}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> and the associated irrigation zone (see Figure <xref rid="j_nejsds71_fig_003">3</xref>). Second, the spatial random effect <inline-formula id="j_nejsds71_ineq_021"><alternatives><mml:math>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$w(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> in (<xref rid="j_nejsds71_eq_002">2.2</xref>) ensures that the fitted irrigation zones change smoothly over space thereby creating zones that can be reasonably implemented via VRI. Consider each piece of the model in the following paragraphs.</p>
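To make the structure of (2.2) concrete, the following is a minimal simulation sketch of the model. The stand-in for the FFNN output (here a simple tanh map, named f_L for illustration) and the iid draw used for the spatial random effect w(s) are assumptions for brevity, not the authors' implementation; the fixed unit variance of the Gaussian noise is as stated in the text.

```python
# Minimal simulation sketch of model (2.2):
#   (Z(s) | w(s)) ~ N(f_L(X(s)) + w(s), 1)
# f_L below is a toy nonlinear stand-in for the FFNN output, and w(s) is
# drawn iid here purely for brevity (the paper uses a spatial prior).
import numpy as np

rng = np.random.default_rng(0)

def f_L(X):
    """Illustrative stand-in for the FFNN output function."""
    return np.tanh(X @ np.ones(X.shape[1]))

n, P = 500, 3                        # number of locations and covariates
X = rng.normal(size=(n, P))          # covariates X(s)
w = rng.normal(scale=0.5, size=n)    # spatial random effect w(s) (iid stand-in)

# Conditional draw of Z(s): mean f_L(X(s)) + w(s), variance fixed at 1
Z = f_L(X) + w + rng.normal(size=n)
```

The variance is fixed at 1 (rather than estimated) so that the scale of the network weights is identified, as described above.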
<p>FFNNs are known for their strong predictive performance, especially when non-linear relationships exist between the covariates and the response variable, as is the case here [<xref ref-type="bibr" rid="j_nejsds71_ref_012">12</xref>]. An FFNN with <italic>L</italic> layers consists of three types of layers: an input layer, hidden layers and an output layer, each with a possibly different number of units. More succinctly, let <inline-formula id="j_nejsds71_ineq_022"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${\boldsymbol{f}_{\ell }}(\boldsymbol{X}(\boldsymbol{s}))={({f_{\ell 1}}(\boldsymbol{X}(\boldsymbol{s})),\dots ,{f_{\ell {P_{\ell }}}}(\boldsymbol{X}(\boldsymbol{s})))^{\prime }}$]]></tex-math></alternatives></inline-formula> be the vector of <inline-formula id="j_nejsds71_ineq_023"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\ell }}$]]></tex-math></alternatives></inline-formula> units at layer <italic>ℓ</italic> of the FFNN. The transformation from layer to layer occurs via 
<disp-formula id="j_nejsds71_eq_003">
<label>(2.3)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
</mml:mtd>
<mml:mtd class="align-even">
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi>ℓ</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">Λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}{\boldsymbol{f}_{\ell }}\big(\boldsymbol{X}(\boldsymbol{s})\big)& ={a_{\ell }}\big({\boldsymbol{\lambda }_{(\ell -1)0}}+{\boldsymbol{\Lambda }_{\ell -1}}{\boldsymbol{f}_{\ell -1}}\big(\boldsymbol{X}(\boldsymbol{s})\big)\big)\end{aligned}\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_nejsds71_ineq_024"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${a_{\ell }}(\cdot )$]]></tex-math></alternatives></inline-formula> is an element-wise nonlinear activation function used at layer <italic>ℓ</italic>, <inline-formula id="j_nejsds71_ineq_025"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi>ℓ</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi>ℓ</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mn>01</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi>ℓ</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mn>0</mml:mn>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${\boldsymbol{\lambda }_{(\ell -1)0}}={({\lambda _{(\ell -1)01}},\dots ,{\lambda _{(\ell -1)0{P_{\ell }}}})^{\prime }}$]]></tex-math></alternatives></inline-formula> is the vector of intercepts (biases) applied to layer <inline-formula id="j_nejsds71_ineq_026"><alternatives><mml:math>
<mml:mi>ℓ</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$\ell -1$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_027"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">Λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi>ℓ</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\boldsymbol{\Lambda }_{\ell -1}}={\{{\lambda _{(\ell -1)ij}}\}_{i,j}}$]]></tex-math></alternatives></inline-formula> is the <inline-formula id="j_nejsds71_ineq_028"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>×</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\ell }}\times {P_{\ell -1}}$]]></tex-math></alternatives></inline-formula> matrix of coefficients (weights) used to transition from layer <inline-formula id="j_nejsds71_ineq_029"><alternatives><mml:math>
<mml:mi>ℓ</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$\ell -1$]]></tex-math></alternatives></inline-formula> to layer <italic>ℓ</italic>. For the input layer (i.e. <inline-formula id="j_nejsds71_ineq_030"><alternatives><mml:math>
<mml:mi>ℓ</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$\ell =1$]]></tex-math></alternatives></inline-formula>), we take <inline-formula id="j_nejsds71_ineq_031"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${\boldsymbol{f}_{1}}(\boldsymbol{X}(\boldsymbol{s}))=\boldsymbol{X}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> such that <inline-formula id="j_nejsds71_ineq_032"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">P</mml:mi></mml:math><tex-math><![CDATA[${P_{1}}=P$]]></tex-math></alternatives></inline-formula> is the number of covariates included in the model. From a statistical point of view, the weights and biases (intercepts and coefficients) <inline-formula id="j_nejsds71_ineq_033"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">Λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup></mml:math><tex-math><![CDATA[${\{{\boldsymbol{\lambda }_{\ell 0}},{\boldsymbol{\Lambda }_{\ell }}\}_{\ell =1}^{L-1}}$]]></tex-math></alternatives></inline-formula> are the model parameters to be estimated from the data, while the number of layers <italic>L</italic>, the layer dimensions <inline-formula id="j_nejsds71_ineq_034"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{P_{\ell }}\}$]]></tex-math></alternatives></inline-formula> and the activation functions are known as “tuning parameters” and are fixed via cross-validation to optimize prediction performance.</p>
<p>The flexibility of FFNNs to capture complex relationships comes from the appropriate choice of <italic>L</italic>, <inline-formula id="j_nejsds71_ineq_035"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{P_{\ell }}\}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_036"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{a_{\ell }}(\cdot )\}$]]></tex-math></alternatives></inline-formula>. First, deeper networks (larger <italic>L</italic>) may be required to allow for more complex transformations of <inline-formula id="j_nejsds71_ineq_037"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$\boldsymbol{X}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> between the input and output layers. Conversely, in the simplest case, when <inline-formula id="j_nejsds71_ineq_038"><alternatives><mml:math>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>2</mml:mn></mml:math><tex-math><![CDATA[$L=2$]]></tex-math></alternatives></inline-formula>, the relationship between <inline-formula id="j_nejsds71_ineq_039"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$\boldsymbol{X}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_040"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> would be linear provided <inline-formula id="j_nejsds71_ineq_041"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">x</mml:mi></mml:math><tex-math><![CDATA[${a_{L}}(x)=x$]]></tex-math></alternatives></inline-formula> is the identity function.</p>
<p>Beyond the number of layers, the number of units per layer (denoted by <inline-formula id="j_nejsds71_ineq_042"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\ell }}$]]></tex-math></alternatives></inline-formula>) can also be modified to capture complex relationships between the covariates and the response variable. For example, if <inline-formula id="j_nejsds71_ineq_043"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">&gt;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\ell }}\gt {P_{1}}$]]></tex-math></alternatives></inline-formula> for any <inline-formula id="j_nejsds71_ineq_044"><alternatives><mml:math>
<mml:mi>ℓ</mml:mi>
<mml:mo mathvariant="normal">&gt;</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$\ell \gt 1$]]></tex-math></alternatives></inline-formula> then the FFNN uses a dimension expansion to model the relationship between covariates and the response [<xref ref-type="bibr" rid="j_nejsds71_ref_055">55</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_006">6</xref>, <xref ref-type="bibr" rid="j_nejsds71_ref_047">47</xref>]. As recently shown by [<xref ref-type="bibr" rid="j_nejsds71_ref_028">28</xref>] and [<xref ref-type="bibr" rid="j_nejsds71_ref_033">33</xref>] but pioneered by [<xref ref-type="bibr" rid="j_nejsds71_ref_035">35</xref>], an infinite dimension expansion would mimic a Gaussian process regression model between the covariates and response. Conversely, if <inline-formula id="j_nejsds71_ineq_045"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\ell }}\lt {P_{1}}$]]></tex-math></alternatives></inline-formula> for any <inline-formula id="j_nejsds71_ineq_046"><alternatives><mml:math>
<mml:mi>ℓ</mml:mi>
<mml:mo mathvariant="normal">&gt;</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$\ell \gt 1$]]></tex-math></alternatives></inline-formula> then the transition in (<xref rid="j_nejsds71_eq_003">2.3</xref>) acts similarly to a principal components regression model by reducing the dimension of the input space [see <xref ref-type="bibr" rid="j_nejsds71_ref_014">14</xref>, Chapter 5].</p>
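The dimension-reduction analogy admits a quick numerical illustration under the simplifying assumption of identity activations: a bottleneck layer of width P_ℓ &lt; P_1 limits the rank of the end-to-end linear map, much as principal components regression projects onto a few components. The widths below are arbitrary.

```python
# With identity activations, a bottleneck layer of width P_ell < P_1
# restricts the end-to-end linear map to rank at most P_ell,
# analogous to a principal-components-style projection.
import numpy as np

rng = np.random.default_rng(3)
P1, bottleneck = 10, 2
W1 = rng.normal(size=(bottleneck, P1))   # reduce: P_2 = 2 < P_1 = 10
W2 = rng.normal(size=(P1, bottleneck))   # map back up to P_1 outputs

W_total = W2 @ W1                        # composed end-to-end linear map
rank = np.linalg.matrix_rank(W_total)    # cannot exceed the bottleneck width
```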
<p>The activation functions <inline-formula id="j_nejsds71_ineq_047"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo fence="true" stretchy="false">}</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msubsup></mml:math><tex-math><![CDATA[${\{{a_{\ell }}(\cdot )\}_{\ell =2}^{L}}$]]></tex-math></alternatives></inline-formula> in (<xref rid="j_nejsds71_eq_003">2.3</xref>) are typically non-linear to ensure that the relationship between <inline-formula id="j_nejsds71_ineq_048"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$\boldsymbol{X}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_049"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> is also non-linear. Typical choices of <inline-formula id="j_nejsds71_ineq_050"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${a_{\ell }}(\cdot )$]]></tex-math></alternatives></inline-formula> include rectified linear units, hyperbolic tangents and identity (see [<xref ref-type="bibr" rid="j_nejsds71_ref_036">36</xref>] and [<xref ref-type="bibr" rid="j_nejsds71_ref_044">44</xref>] for a discussion of these activations). However, simple algebraic manipulations show that if <inline-formula id="j_nejsds71_ineq_051"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="bold-italic">x</mml:mi></mml:math><tex-math><![CDATA[${a_{\ell }}(\boldsymbol{x})=\boldsymbol{x}$]]></tex-math></alternatives></inline-formula> (the identity activation) for all <italic>ℓ</italic> then the relationship between <inline-formula id="j_nejsds71_ineq_052"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_053"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$\boldsymbol{X}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> would be linear. Notably, to increase flexibility, different layers can utilize different activation functions but, traditionally, the same activation function is used for all layers with the exception of the output layer where the domain of <inline-formula id="j_nejsds71_ineq_054"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${a_{L}}(\cdot )$]]></tex-math></alternatives></inline-formula> must match the support of <inline-formula id="j_nejsds71_ineq_055"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> (in our case, <inline-formula id="j_nejsds71_ineq_056"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${a_{L}}(\cdot )$]]></tex-math></alternatives></inline-formula> is the identity function).</p>
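The claim that identity activations at every layer collapse the network to a linear (affine) map can be verified numerically: composing the affine layers algebraically yields the same output as the forward pass. The widths and random weights below are arbitrary illustrations.

```python
# Verify that a_ell(x) = x at every layer reduces the FFNN to a single
# affine map: forward pass == algebraically composed W_tot x + b_tot.
import numpy as np

rng = np.random.default_rng(2)
P = [4, 6, 3, 1]
Ws = [rng.normal(size=(P[l + 1], P[l])) for l in range(len(P) - 1)]
bs = [rng.normal(size=P[l + 1]) for l in range(len(P) - 1)]

def forward_identity(x):
    f = x
    for b, W in zip(bs, Ws):
        f = b + W @ f               # identity activation at every layer
    return f

# Collapse the layers into one affine map W_tot x + b_tot
W_tot = np.eye(P[0])
b_tot = np.zeros(P[0])
for b, W in zip(bs, Ws):
    b_tot = b + W @ b_tot
    W_tot = W @ W_tot

x = rng.normal(size=P[0])
assert np.allclose(forward_identity(x), b_tot + W_tot @ x)
```

This is the "simple algebraic manipulation" referred to above: a composition of affine maps is itself affine, so non-linear activations are essential for a non-linear covariate-response relationship.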
<p>While <inline-formula id="j_nejsds71_ineq_057"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${f_{L}}(\boldsymbol{X}(\boldsymbol{s}))$]]></tex-math></alternatives></inline-formula> captures the non-linear relationship between the response and covariates, the spatial random effect <inline-formula id="j_nejsds71_ineq_058"><alternatives><mml:math>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$w(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> serves to smooth the predicted zones over space. To achieve this smoothing, we opt to use the Moran basis function expansion for <inline-formula id="j_nejsds71_ineq_059"><alternatives><mml:math>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$w(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> advocated by, among others, [<xref ref-type="bibr" rid="j_nejsds71_ref_023">23</xref>]. To calculate the Moran basis functions, we first determine an adjacency matrix <inline-formula id="j_nejsds71_ineq_060"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mo>=</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\boldsymbol{A}=\{{a_{ij}}\}$]]></tex-math></alternatives></inline-formula> between all observed locations and desired prediction locations according to inverse distance weighting 
<disp-formula id="j_nejsds71_eq_004">
<label>(2.4)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfenced separators="" open="{" close="">
<mml:mrow>
<mml:mtable columnspacing="10.0pt" equalrows="false" columnlines="none" equalcolumns="false" columnalign="left left">
<mml:mtr>
<mml:mtd class="array">
<mml:mn>0</mml:mn>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd class="array">
<mml:mtext>if</mml:mtext>
<mml:mspace width="2.5pt"/>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="array">
<mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">‖</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">‖</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd class="array">
<mml:mtext>otherwise</mml:mtext>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {a_{ij}}=\left\{\begin{array}{l@{\hskip10.0pt}l}0\hspace{1em}& \text{if}\hspace{2.5pt}i=j\\ {} \frac{1}{\| {\boldsymbol{s}_{i}}-{\boldsymbol{s}_{j}}\| }\hspace{1em}& \text{otherwise}\end{array}\right.\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_nejsds71_ineq_061"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\boldsymbol{s}_{i}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_062"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\boldsymbol{s}_{j}}$]]></tex-math></alternatives></inline-formula> are two locations on the field (either observed or predicted). Let <inline-formula id="j_nejsds71_ineq_063"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">P</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mo>−</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn mathvariant="bold">11</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal" stretchy="false">/</mml:mo>
<mml:mi mathvariant="italic">n</mml:mi></mml:math><tex-math><![CDATA[$\boldsymbol{P}=\boldsymbol{I}-{\mathbf{11}^{\prime }}/n$]]></tex-math></alternatives></inline-formula> where <inline-formula id="j_nejsds71_ineq_064"><alternatives><mml:math>
<mml:mn mathvariant="bold">1</mml:mn></mml:math><tex-math><![CDATA[$\mathbf{1}$]]></tex-math></alternatives></inline-formula> is a vector of ones and <italic>n</italic> is the total data size (number of observed locations plus the number of desired prediction locations). Let <inline-formula id="j_nejsds71_ineq_065"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">B</mml:mi></mml:math><tex-math><![CDATA[$\boldsymbol{B}$]]></tex-math></alternatives></inline-formula> be the <inline-formula id="j_nejsds71_ineq_066"><alternatives><mml:math>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo>×</mml:mo>
<mml:mi mathvariant="italic">K</mml:mi></mml:math><tex-math><![CDATA[$n\times K$]]></tex-math></alternatives></inline-formula> matrix of the <italic>K</italic> eigenvectors associated with the <italic>K</italic> largest positive eigenvalues of <inline-formula id="j_nejsds71_ineq_067"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">P</mml:mi>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mi mathvariant="bold-italic">P</mml:mi></mml:math><tex-math><![CDATA[$\boldsymbol{P}\boldsymbol{A}\boldsymbol{P}$]]></tex-math></alternatives></inline-formula>. Then, we set 
<disp-formula id="j_nejsds71_eq_005">
<label>(2.5)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mi mathvariant="bold-italic">β</mml:mi>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ w(\boldsymbol{s})={\boldsymbol{b}^{\prime }}(\boldsymbol{s})\boldsymbol{\beta }\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_nejsds71_ineq_068"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${\boldsymbol{b}^{\prime }}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> is the row of <inline-formula id="j_nejsds71_ineq_069"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">B</mml:mi></mml:math><tex-math><![CDATA[$\boldsymbol{B}$]]></tex-math></alternatives></inline-formula> associated with location <inline-formula id="j_nejsds71_ineq_070"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">s</mml:mi></mml:math><tex-math><![CDATA[$\boldsymbol{s}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_071"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">β</mml:mi></mml:math><tex-math><![CDATA[$\boldsymbol{\beta }$]]></tex-math></alternatives></inline-formula> is a vector of coefficients to be estimated from the data. Generally, as <italic>K</italic> increases, the fitted spatial surface from the Moran basis becomes more variable. Hence, for the purposes of this research, we treat <italic>K</italic> as an additional tuning parameter chosen via cross-validation.</p>
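As a concrete illustration of the construction above, the Moran basis matrix can be computed in a few lines. The following Python sketch (the function name <monospace>moran_basis</monospace> is ours, not from the authors' code) builds the inverse-distance adjacency matrix of Eq. (2.4), the centering projection, and the eigenvectors of the centered adjacency matrix associated with the largest positive eigenvalues:

```python
import numpy as np

def moran_basis(coords, K):
    """Compute an n x K matrix of Moran basis functions from site coordinates.

    coords : (n, 2) array of observed plus prediction locations.
    K      : number of eigenvectors retained (a tuning parameter).
    """
    n = coords.shape[0]
    # Inverse-distance adjacency (Eq. 2.4): a_ij = 1/||s_i - s_j||, a_ii = 0.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.zeros((n, n))
    off = ~np.eye(n, dtype=bool)
    A[off] = 1.0 / dist[off]
    # Centering projection P = I - 11'/n.
    P = np.eye(n) - np.ones((n, n)) / n
    # Eigenvectors of PAP for the K largest positive eigenvalues.
    vals, vecs = np.linalg.eigh(P @ A @ P)
    order = np.argsort(vals)[::-1][:K]
    if np.any(vals[order] <= 0):
        raise ValueError("fewer than K positive eigenvalues")
    return vecs[:, order]  # row i of the result is b'(s_i)
```

Because every retained eigenvector has a positive eigenvalue, each column of the returned matrix is orthogonal to the constant vector, which is what makes the basis capture smooth, mean-zero spatial patterns.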
</sec>
<sec id="j_nejsds71_s_006">
<label>2.2</label>
<title>Parameter Estimation</title>
<p>Even though neural networks are typically fit via loss minimization, in an effort to merge machine learning and statistical modeling, we adopt a Bayesian approach for parameter estimation so that the predicted irrigation zone surface accounts for the associated parameter uncertainty. Accounting for uncertainty is important because the predicted irrigation zones can then be altered so as to be more easily incorporated into a VRI system. For example, if the zone assigned to a certain location is highly uncertain, then that location can be manually assigned to a zone by the farmer to increase the overall efficiency of the VRI.</p>
<p>Under the Bayesian approach, prior distributions are required for all the neural network parameters <inline-formula id="j_nejsds71_ineq_072"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{\boldsymbol{\lambda }_{\ell 0}},{\boldsymbol{\lambda }_{\ell }}\}$]]></tex-math></alternatives></inline-formula>, the Moran basis coefficients <inline-formula id="j_nejsds71_ineq_073"><alternatives><mml:math>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">β</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$(\boldsymbol{\beta })$]]></tex-math></alternatives></inline-formula> and the cut points <inline-formula id="j_nejsds71_ineq_074"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${c_{2}},\dots ,{c_{R}}$]]></tex-math></alternatives></inline-formula>, and parameter estimation proceeds via posterior inference. Because the <inline-formula id="j_nejsds71_ineq_075"><alternatives><mml:math>
<mml:mi mathvariant="bold-italic">β</mml:mi></mml:math><tex-math><![CDATA[$\boldsymbol{\beta }$]]></tex-math></alternatives></inline-formula> parameter vector consists of coefficients in an ordered probit regression model, we assume a vague <inline-formula id="j_nejsds71_ineq_076"><alternatives><mml:math>
<mml:mi mathvariant="italic">N</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$N(0,100\boldsymbol{I})$]]></tex-math></alternatives></inline-formula> prior distribution because the data can generally estimate these parameters well [<xref ref-type="bibr" rid="j_nejsds71_ref_022">22</xref>]. Because the cut points are ordered so that <inline-formula id="j_nejsds71_ineq_077"><alternatives><mml:math>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:mo stretchy="false">⋯</mml:mo>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>∞</mml:mi></mml:math><tex-math><![CDATA[$0\lt {c_{2}}\lt \cdots \lt {c_{R}}=\infty $]]></tex-math></alternatives></inline-formula>, each cut point is transformed according to <inline-formula id="j_nejsds71_ineq_078"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal">⋆</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mo movablelimits="false">log</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${c_{2}^{\mathrm{\star }}}=\log ({c_{2}})$]]></tex-math></alternatives></inline-formula> and 
<disp-formula id="j_nejsds71_eq_006">
<label>(2.6)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal">⋆</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mtext>log</mml:mtext>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {c_{r}^{\mathrm{\star }}}=\text{log}({c_{r}}-{c_{r-1}})\]]]></tex-math></alternatives>
</disp-formula> 
for <inline-formula id="j_nejsds71_ineq_079"><alternatives><mml:math>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>3</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$r=3,\dots ,R-1$]]></tex-math></alternatives></inline-formula> and a <inline-formula id="j_nejsds71_ineq_080"><alternatives><mml:math>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>10</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$\mathcal{N}(0,10)$]]></tex-math></alternatives></inline-formula> prior is used for each <inline-formula id="j_nejsds71_ineq_081"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal">⋆</mml:mo>
</mml:mrow>
</mml:msubsup></mml:math><tex-math><![CDATA[${c_{r}^{\mathrm{\star }}}$]]></tex-math></alternatives></inline-formula>. The corresponding back transformation <inline-formula id="j_nejsds71_ineq_082"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo movablelimits="false">exp</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal">⋆</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[${c_{r}}={\textstyle\sum _{i=2}^{r}}\exp \{{c_{i}^{\mathrm{\star }}}\}$]]></tex-math></alternatives></inline-formula> ensures the ordering constraint.</p>
<p>The priors for the FFNN weights <inline-formula id="j_nejsds71_ineq_083"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="normal">Λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{\Lambda _{\ell }}\}$]]></tex-math></alternatives></inline-formula> and biases <inline-formula id="j_nejsds71_ineq_084"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{\lambda _{0\ell }}\}$]]></tex-math></alternatives></inline-formula> need to be chosen with care to avoid overfitting. As documented by, among others, [<xref ref-type="bibr" rid="j_nejsds71_ref_010">10</xref>], neural networks can easily overfit training data, resulting in poor predictive performance. One common approach for restricting neural networks is penalization (regularization). In a Bayesian setting, regularization is enforced via informative prior constraints [see <xref ref-type="bibr" rid="j_nejsds71_ref_043">43</xref>, for a review]. As such, we assume <italic>a priori</italic> independent <inline-formula id="j_nejsds71_ineq_085"><alternatives><mml:math>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>0.01</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$\mathcal{N}(0,0.01)$]]></tex-math></alternatives></inline-formula> priors for all biases and weights. The highly informative 0.01 variance constrains the biases and weights to be near zero, thus preventing overfitting in a manner similar to ridge and LASSO regression models. For our application below, we tried several values for the prior variance and ultimately settled on 0.01 as the best performing in terms of predictive accuracy (see Section <xref rid="j_nejsds71_s_008">3.1</xref> below). Alternatively, a Laplace prior could be used for the weights because it corresponds to the LASSO penalty [<xref ref-type="bibr" rid="j_nejsds71_ref_051">51</xref>]. For our purposes, the above Gaussian prior worked adequately to prevent overfitting, as demonstrated by our cross-validation study below.</p>
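The correspondence between the Gaussian prior and ridge-type penalization alluded to above can be made explicit. For a generic weight vector <inline-formula><alternatives><tex-math><![CDATA[$\boldsymbol{\lambda }$]]></tex-math></alternatives></inline-formula> with independent <inline-formula><alternatives><tex-math><![CDATA[$\mathcal{N}(0,{\sigma _{0}^{2}})$]]></tex-math></alternatives></inline-formula> priors on its entries, the negative log posterior is

```latex
-\log p(\boldsymbol{\lambda}\mid\text{data})
  = -\log p(\text{data}\mid\boldsymbol{\lambda})
  + \frac{1}{2\sigma_0^2}\,\|\boldsymbol{\lambda}\|_2^2 + \text{const},
```

so the posterior mode is a penalized estimate with L2 penalty weight <inline-formula><alternatives><tex-math><![CDATA[$1/(2{\sigma _{0}^{2}})$]]></tex-math></alternatives></inline-formula>; with <inline-formula><alternatives><tex-math><![CDATA[${\sigma _{0}^{2}}=0.01$]]></tex-math></alternatives></inline-formula> this gives a penalty weight of 50, while a Laplace prior would yield the L1 (LASSO) penalty instead.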
<p>Posterior inference for our model parameters was accomplished by using Markov chain Monte Carlo (MCMC) sampling. Conditional on all other parameters, the complete conditional distribution for <inline-formula id="j_nejsds71_ineq_086"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> is a truncated Gaussian distribution with mean <inline-formula id="j_nejsds71_ineq_087"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mi mathvariant="bold-italic">β</mml:mi></mml:math><tex-math><![CDATA[${f_{L}}(\boldsymbol{X}(\boldsymbol{s}))+{\boldsymbol{b}^{\prime }}(\boldsymbol{s})\boldsymbol{\beta }$]]></tex-math></alternatives></inline-formula>, variance 1 and endpoints <inline-formula id="j_nejsds71_ineq_088"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${c_{Y(\boldsymbol{s})-1}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_089"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${c_{Y(\boldsymbol{s})}}$]]></tex-math></alternatives></inline-formula>. Because each <inline-formula id="j_nejsds71_ineq_090"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> is conditionally independent of the others, this sampling can be done independently and efficiently. Next, because <inline-formula id="j_nejsds71_ineq_091"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mi mathvariant="double-struck">R</mml:mi></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})\in \mathbb{R}$]]></tex-math></alternatives></inline-formula>, we assume <inline-formula id="j_nejsds71_ineq_092"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${a_{L}}(\cdot )$]]></tex-math></alternatives></inline-formula> is the identity activation and <inline-formula id="j_nejsds71_ineq_093"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[${P_{L}}=1$]]></tex-math></alternatives></inline-formula> so that 
<disp-formula id="j_nejsds71_eq_007">
<label>(2.7)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mover>
<mml:mrow>
<mml:mo stretchy="false">∼</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">d</mml:mi>
</mml:mrow>
</mml:mover>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">Λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mi mathvariant="bold-italic">β</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ Z(\boldsymbol{s})\stackrel{iid}{\sim }\mathcal{N}\big({\lambda _{0(L-1)}}+{\boldsymbol{f}^{\prime }_{L-1}}\big(\boldsymbol{X}(\boldsymbol{s})\big){\boldsymbol{\Lambda }_{L-1}}+{\boldsymbol{b}^{\prime }}(\boldsymbol{s})\boldsymbol{\beta },1\big)\]]]></tex-math></alternatives>
</disp-formula> 
which can be rewritten simply as <inline-formula id="j_nejsds71_ineq_094"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo stretchy="false">∼</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal">⋆</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">β</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal">⋆</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})\sim \mathcal{N}({\boldsymbol{X}^{\prime }_{\mathrm{\star }}}(\boldsymbol{s}){\boldsymbol{\beta }^{\mathrm{\star }}},1)$]]></tex-math></alternatives></inline-formula> where <inline-formula id="j_nejsds71_ineq_095"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">β</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal">⋆</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="bold">Λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">β</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${\boldsymbol{\beta }^{\mathrm{\star }}}={({\lambda _{0(L-1)}},{\boldsymbol{\Lambda }^{\prime }_{L-1}},{\boldsymbol{\beta }^{\prime }})^{\prime }}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_096"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal">⋆</mml:mo>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${\boldsymbol{X}_{\mathrm{\star }}}(\boldsymbol{s})={(1,{\boldsymbol{f}^{\prime }_{L-1}}(\boldsymbol{X}(\boldsymbol{s})),{\boldsymbol{b}^{\prime }}(\boldsymbol{s}))^{\prime }}$]]></tex-math></alternatives></inline-formula>. Under the above Gaussian priors, the complete conditional for <inline-formula id="j_nejsds71_ineq_097"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">β</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal">⋆</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${\boldsymbol{\beta }^{\mathrm{\star }}}$]]></tex-math></alternatives></inline-formula> is Gaussian and can be sampled directly.</p>
<p>While <inline-formula id="j_nejsds71_ineq_098"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{Z(\boldsymbol{s})\}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_099"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">Λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="bold-italic">β</mml:mi>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{\lambda _{0(L-1)}},{\boldsymbol{\Lambda }_{L-1}},\boldsymbol{\beta }\}$]]></tex-math></alternatives></inline-formula> can be sampled directly from the corresponding complete conditional distributions, the cut points <inline-formula id="j_nejsds71_ineq_100"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${c_{2}},\dots ,{c_{R-1}}$]]></tex-math></alternatives></inline-formula> and other FFNN parameters <inline-formula id="j_nejsds71_ineq_101"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mi mathvariant="italic">l</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">Λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">l</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">l</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup></mml:math><tex-math><![CDATA[${\{{\boldsymbol{\lambda }_{0l}},{\boldsymbol{\Lambda }_{l}}\}_{l=1}^{L-2}}$]]></tex-math></alternatives></inline-formula> must be sampled indirectly via Metropolis-Hastings or similar algorithms. In early phases of this research, the weights and biases were sampled individually or one column at a time. However, this proved computationally expensive, and the weights within each layer were highly correlated. As a result, our final MCMC algorithm sampled the neural net weight matrix <inline-formula id="j_nejsds71_ineq_102"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">Λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\boldsymbol{\Lambda }_{\ell }}$]]></tex-math></alternatives></inline-formula> and the biases <inline-formula id="j_nejsds71_ineq_103"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\boldsymbol{\lambda }_{0\ell }}$]]></tex-math></alternatives></inline-formula> jointly within each layer (with each layer sampled separately). For our MCMC algorithm, we used the adaptive Metropolis algorithm from [<xref ref-type="bibr" rid="j_nejsds71_ref_017">17</xref>] to update the proposal variance and achieve better mixing for the FFNN parameters. Finally, this adaptive Metropolis algorithm was again used to sample the transformed cutpoints <inline-formula id="j_nejsds71_ineq_104"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal">⋆</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal">⋆</mml:mo>
</mml:mrow>
</mml:msubsup></mml:math><tex-math><![CDATA[${c_{2}^{\mathrm{\star }}},\dots ,{c_{R-1}^{\mathrm{\star }}}$]]></tex-math></alternatives></inline-formula> after marginalizing out <inline-formula id="j_nejsds71_ineq_105"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> as suggested by [<xref ref-type="bibr" rid="j_nejsds71_ref_022">22</xref>].</p>
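The adaptive Metropolis step used for the FFNN parameters and the transformed cutpoints can be sketched generically. The block below is a minimal illustration of a Haario-style random-walk update whose proposal covariance is tuned from the accumulated sample history; the function and argument names are ours, and this is a sketch rather than the authors' exact implementation.

```python
import numpy as np

def adaptive_metropolis(log_post, theta0, n_iter=5000, adapt_start=100,
                        eps=1e-6, seed=None):
    """Adaptive Metropolis sketch: after an initial fixed-proposal phase,
    the random-walk proposal covariance is scaled from the empirical
    covariance of the past draws (Haario-style adaptation)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    sd = 2.38 ** 2 / d                                # standard adaptive scaling
    lp = log_post(theta)
    draws = np.empty((n_iter, d))
    for t in range(n_iter):
        if t < adapt_start:
            cov = (0.1 ** 2 / d) * np.eye(d)          # fixed early proposal
        else:
            cov = sd * (np.cov(draws[:t].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(theta, cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
            theta, lp = prop, lp_prop
        draws[t] = theta
    return draws
```

In the model above, `log_post` would be the log complete conditional of a flattened weight-and-bias block for one layer (one such update per layer), or of the transformed cutpoints after marginalizing out the latent variable.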
</sec>
</sec>
<sec id="j_nejsds71_s_007">
<label>3</label>
<title>Application to Rexburg Field</title>
<sec id="j_nejsds71_s_008">
<label>3.1</label>
<title>Model Settings and MCMC Diagnostics</title>
<p>For our application, we used ten covariates: elevation, yield, NDVI for 2018 and 2019, two different measures of slope at a location, the <italic>x</italic> and <italic>y</italic> aspects of a location, and a topographical wetness index (TWI). Each of the covariates was observed at 5062 <inline-formula id="j_nejsds71_ineq_106"><alternatives><mml:math>
<mml:mn>10</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:msup>
<mml:mrow>
<mml:mtext>m</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[$10\hspace{2.5pt}{\text{m}^{2}}$]]></tex-math></alternatives></inline-formula> unit of land using a grain yield monitor installed on the harvester. TWI is a static measure calculated from a digital elevation model and indicates where water will accumulate in an area with elevation variability. The slope measures and the <italic>x</italic> and <italic>y</italic> aspects were likewise calculated from a digital elevation model.</p>
<mml:mn>10</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:msup>
<mml:mrow>
<mml:mtext>m</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[$10\hspace{2.5pt}{\text{m}^{2}}$]]></tex-math></alternatives></inline-formula> unit of land using a grain yield monitor installed on the harvester. TWI, is a static measure, calculated from a digital elevation models, and indicates where water will accumulate in an area with elevation variability. Slope as well as the <italic>x</italic> and <italic>y</italic> aspects were likewise calculated from a digital elevation model.</p>
<p>The irrigation zone data consisted of only the 66 observations shown in the right panel of Figure <xref rid="j_nejsds71_fig_001">1</xref>. For our implementation, we set the number of irrigation zones, <italic>R</italic>, to 3. As previously mentioned, <italic>R</italic> will rarely if ever be higher than 4 or 5 and, in discussion with the farm owner, 3 zones was determined to be reasonable based on the field’s VRI capacities. We used rectified linear unit (ReLU) activation functions for all layers with the exception of the output layer, which used an identity activation to match the support of <inline-formula id="j_nejsds71_ineq_108"><alternatives><mml:math>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$Z(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula>. Certainly, other activation functions could be used but the rectified linear units is one of the most common.</p>
<p>Beyond the above model settings, implementation of our spatial neural network model requires tuning the number of layers (<italic>L</italic>), the number of units per layer (<inline-formula id="j_nejsds71_ineq_109"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>ℓ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{P_{\ell }}\}$]]></tex-math></alternatives></inline-formula>), the number of Moran basis functions (<italic>K</italic>), and the penalization for the weights in the prior (the prior variance). Computationally, fitting the neural network model at a single setting took about 3.5 hours on a 2.60 GHz CPU. For each parameter setting in Table <xref rid="j_nejsds71_tab_001">1</xref> and for each prior variance of 0.1, 0.01, and 0.001 for the weights, we performed 6-fold cross-validation and calculated the average adjusted Rand index (ARI). The ARI is given by 
<disp-formula id="j_nejsds71_eq_008">
<alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:mi mathvariant="italic">A</mml:mi>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:mi mathvariant="italic">I</mml:mi>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mfenced separators="" open="" close="/">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mfrac linethickness="0.0pt">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
<mml:mo>−</mml:mo>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mfrac linethickness="0.0pt">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mfrac linethickness="0.0pt">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
<mml:mo fence="true" stretchy="false">]</mml:mo>
</mml:mrow>
</mml:mfenced>
<mml:mfenced separators="" open="(" close=")">
<mml:mfrac linethickness="0.0pt">
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mfenced separators="" open="" close="/">
<mml:mrow>
<mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mfrac linethickness="0.0pt">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mfrac linethickness="0.0pt">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
<mml:mo fence="true" stretchy="false">]</mml:mo>
<mml:mo>−</mml:mo>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mfrac linethickness="0.0pt">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mfrac linethickness="0.0pt">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
<mml:mo fence="true" stretchy="false">]</mml:mo>
</mml:mrow>
</mml:mfenced>
<mml:mfenced separators="" open="(" close=")">
<mml:mfrac linethickness="0.0pt">
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ ARI=\frac{\left.{\textstyle\sum _{ij}}\left(\genfrac{}{}{0.0pt}{}{{n_{ij}}}{2}\right)-[{\textstyle\sum _{i}}\left(\genfrac{}{}{0.0pt}{}{{a_{i}}}{2}\right){\textstyle\sum _{j}}\left(\genfrac{}{}{0.0pt}{}{{b_{j}}}{2}\right)]\right/\left(\genfrac{}{}{0.0pt}{}{n}{2}\right)}{\left.\frac{1}{2}[{\textstyle\sum _{i}}\left(\genfrac{}{}{0.0pt}{}{{a_{i}}}{2}\right)+{\textstyle\sum _{j}}\left(\genfrac{}{}{0.0pt}{}{{b_{j}}}{2}\right)]-[{\textstyle\sum _{i}}\left(\genfrac{}{}{0.0pt}{}{{a_{i}}}{2}\right){\textstyle\sum _{j}}\left(\genfrac{}{}{0.0pt}{}{{b_{j}}}{2}\right)]\right/\left(\genfrac{}{}{0.0pt}{}{n}{2}\right)}\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_nejsds71_ineq_110"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${a_{r}}$]]></tex-math></alternatives></inline-formula> is the number of locations predicted to belong to irrigation zone <italic>r</italic> and <inline-formula id="j_nejsds71_ineq_111"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${b_{r}}$]]></tex-math></alternatives></inline-formula> is the number of true locations belonging to irrigation zone <italic>r</italic> for <inline-formula id="j_nejsds71_ineq_112"><alternatives><mml:math>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">R</mml:mi></mml:math><tex-math><![CDATA[$r=1,\dots ,R$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_113"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${n_{ij}}$]]></tex-math></alternatives></inline-formula> is the number of locations classified as zone <italic>i</italic> in the predictions and zone <italic>j</italic> in the truth. Intuitively, the ARI measures the similarity between two clusterings (in our case, between the true zones and the predicted zones) and ranges from −1 to 1, where 1 indicates perfect agreement between the two clusterings, 0 indicates agreement no better than chance, and −1 indicates that the two clusterings are completely different. Prior to fitting the models in Table <xref rid="j_nejsds71_tab_001">1</xref>, a larger grid was first used to identify reasonable values for the tuning parameters. Because our data set consisted of only 66 observations, an effort was made to keep the total number of parameters low; deeper neural networks generally require large amounts of data to be effective. Hence, the grid search only examined one- or two-layer neural networks (in addition to the input and output layers). The maximum number of Moran basis functions considered was 10 in order to ensure that the zones were determined by the neural network predictions rather than being overly driven by the spatial component of our model.</p>
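The ARI above can be computed directly from the contingency table of predicted versus true zone labels. The sketch below (the function name is ours) mirrors the formula term by term, with the <italic>a</italic> and <italic>b</italic> totals taken as row and column sums of the contingency table:

```python
from math import comb

import numpy as np

def adjusted_rand_index(pred, truth):
    """Adjusted Rand index between a predicted and a true zone labeling."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    n = int(pred.size)
    # contingency table: n_ij = locations in predicted zone i and true zone j
    nij = np.array([[int(np.sum((pred == i) & (truth == j)))
                     for j in np.unique(truth)] for i in np.unique(pred)])
    sum_ij = sum(comb(int(v), 2) for v in nij.ravel())     # sum_ij C(n_ij, 2)
    sum_a = sum(comb(int(v), 2) for v in nij.sum(axis=1))  # sum_i C(a_i, 2)
    sum_b = sum(comb(int(v), 2) for v in nij.sum(axis=0))  # sum_j C(b_j, 2)
    expected = sum_a * sum_b / comb(n, 2)
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)
```

Identical clusterings (even with permuted labels) score 1, while chance-level agreement scores near 0.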
<p>Table <xref rid="j_nejsds71_tab_001">1</xref> shows the cross-validation results using the 0.01 prior variance because this value was uniformly better in terms of predictive accuracy. Cross-validation identified a single hidden layer with 10 neurons and 5 Moran basis functions as the best setting for these data. The results displayed in the following subsections are from this model setting. Note that, generally, from Table <xref rid="j_nejsds71_tab_001">1</xref>, adding spatial basis functions improved the model’s predictive ability, suggesting that merging spatial modeling techniques with deep learning is of value in this particular setting.</p>
<table-wrap id="j_nejsds71_tab_001">
<label>Table 1</label>
<caption>
<p>Cross Validation results. The first column lists the number of neurons for each layer, the second column indicates the number of layers for the neural net, the third column shows the number of Moran basis functions, and the fourth column gives the average adjusted Rand index over the 6 folds.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: center; border-top: double; border-bottom: solid thin">Neurons</td>
<td style="vertical-align: top; text-align: center; border-top: double; border-bottom: solid thin">Layers</td>
<td style="vertical-align: top; text-align: center; border-top: double; border-bottom: solid thin; border-right: solid thin">Spatial</td>
<td style="vertical-align: top; text-align: center; border-top: double; border-bottom: solid thin">ARI</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: center">10</td>
<td style="vertical-align: top; text-align: center">1</td>
<td style="vertical-align: top; text-align: center; border-right: solid thin">0</td>
<td style="vertical-align: top; text-align: center">0.06767</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: center">(10,10)</td>
<td style="vertical-align: top; text-align: center">2</td>
<td style="vertical-align: top; text-align: center; border-right: solid thin">0</td>
<td style="vertical-align: top; text-align: center">0.1776</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: center">20</td>
<td style="vertical-align: top; text-align: center">1</td>
<td style="vertical-align: top; text-align: center; border-right: solid thin">0</td>
<td style="vertical-align: top; text-align: center">0.12296</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: center">10</td>
<td style="vertical-align: top; text-align: center">1</td>
<td style="vertical-align: top; text-align: center; border-right: solid thin">5</td>
<td style="vertical-align: top; text-align: center">0.31795</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: center">(10,10)</td>
<td style="vertical-align: top; text-align: center">2</td>
<td style="vertical-align: top; text-align: center; border-right: solid thin">5</td>
<td style="vertical-align: top; text-align: center">0.22377</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: center; border-bottom: solid thin">20</td>
<td style="vertical-align: top; text-align: center; border-bottom: solid thin">1</td>
<td style="vertical-align: top; text-align: center; border-bottom: solid thin; border-right: solid thin">5</td>
<td style="vertical-align: top; text-align: center; border-bottom: solid thin">0.13879</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>When using Markov chain Monte Carlo sampling in a Bayesian framework, as is the case here, it is important to verify via convergence diagnostics that the algorithm provides samples of the parameters from the posterior distribution. The supplementary material to this article includes trace plots of <inline-formula id="j_nejsds71_ineq_114"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${f_{L}}(\boldsymbol{X}(\boldsymbol{s}))+w(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> for four different locations along with a trace plot of an example cut point. We focus on these trace plots as <inline-formula id="j_nejsds71_ineq_115"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${f_{L}}(\boldsymbol{X}(\boldsymbol{s}))+w(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> and the cut point because these are the main quantities from which we derive our irrigation zone delineation. These trace plots show that these parameters have converged and can be used for posterior inference.</p>
</sec>
<sec id="j_nejsds71_s_009">
<label>3.2</label>
<title>Model Comparisons</title>
<p>As a means to validate the use of the spatial neural network model proposed in this analysis, four additional models were fit to the data and the ARI was again calculated for the 66 locations across the field in Rexburg. The five models compared are ordinal logistic regression with linear effects (Logistic), ordinal logistic regression with natural splines (NSLogistic), ordinal logistic regression with natural splines and spatial bases (Spatial NSLogistic), the neural network model with no spatial basis functions (NN), and our full neural network model with spatial basis functions (Spatial NN). Ordinal logistic regression is a common approach to modeling discrete ordinal multinomial response variables. One of its drawbacks is the assumption that the relationship between predictors and the response is linear. Splines are a common statistical approach to modeling non-linear relationships, while neural networks are a common machine learning approach to accounting for them. These models thus provide a comparison between the two approaches to modeling a non-linear relationship. While neural networks and splines are able to account for non-linear relationships, they do not incorporate any spatial correlation. For irrigation zone delineation, the spatial correlation is used to smooth the irrigation zone predictions across the field. As a result, we also included the spatial basis functions beyond the natural splines and/or neural network to help improve the predictions of the zones.</p>
<table-wrap id="j_nejsds71_tab_002">
<label>Table 2</label>
<caption>
<p>Adjusted Rand index for the five models for the 66 locations in the field.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: center; border-top: double; border-bottom: solid thin; border-right: solid thin">Model</td>
<td style="vertical-align: top; text-align: center; border-top: double; border-bottom: solid thin">ARI</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: center; border-right: solid thin">Logistic</td>
<td style="vertical-align: top; text-align: center">0.2065</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: center; border-right: solid thin">NSLogistic</td>
<td style="vertical-align: top; text-align: center">0.2273</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: center; border-right: solid thin">Spatial NSLogistic</td>
<td style="vertical-align: top; text-align: center">0.2412</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: center; border-right: solid thin">NN</td>
<td style="vertical-align: top; text-align: center">0.2245</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: center; border-bottom: solid thin; border-right: solid thin">Spatial NN</td>
<td style="vertical-align: top; text-align: center; border-bottom: solid thin">0.2741</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Table <xref rid="j_nejsds71_tab_002">2</xref> shows the adjusted Rand index for the five models using all 66 locations in the field. The results show that the ordinal logistic regression model is least able to delineate zones. This is, perhaps, not surprising given this model accounts for neither the non-linear relationships between predictors and the response variable nor the spatial correlation. The spline model and the neural network without spatial basis functions had very similar performance, and both seem to be effective approaches to modeling a non-linear relationship. However, this effectiveness was enhanced in both models by the addition of spatial basis functions. The best model was the neural network with the spatial basis functions, which validates the use of the model proposed in this analysis since it accounts for both the non-linear relationships and the spatial correlation.</p>
</sec>
<sec id="j_nejsds71_s_010">
<label>3.3</label>
<title>Field-wide Irrigation Zone Delineation</title>
<p>The primary goal of the analysis was to obtain predictions of the irrigation zones for the whole field for use in the VRI system. The fitted spatial neural network model was used to predict which irrigation zone each location across the whole field should be classified in. Because of the Monte Carlo sampling, the uncertainty in the zone delineation could also be quantified. Figure <xref rid="j_nejsds71_fig_004">4</xref> shows the highest probability zone by location (top left) along with the probabilities of each location being assigned to each of the three zones. The predictions form fairly spatially contiguous zones (as desired) with the exception of the middle of the field in areas between zones 1 and 2.</p>
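The probability maps in Figure 4 amount to tabulating, across the posterior draws, how often each location's latent value falls in each cut-point interval. A minimal sketch of that calculation follows; the array names and shapes are assumptions, not the authors' code.

```python
import numpy as np

def zone_probabilities(latent_draws, cut_draws):
    """Posterior zone probabilities and highest-probability zone per location.
    latent_draws: (M, S) MCMC draws of f_L(X(s)) + w(s) at S locations;
    cut_draws: (M, R-1) draws of the ordered cut points. Zone r corresponds
    to the latent value falling in (c_{r-1}, c_r]."""
    M, S = latent_draws.shape
    R = cut_draws.shape[1] + 1
    zones = np.empty((M, S), dtype=int)
    for m in range(M):   # bin each draw by that draw's own cut points
        zones[m] = np.searchsorted(cut_draws[m], latent_draws[m]) + 1
    probs = np.stack([(zones == r).mean(axis=0) for r in range(1, R + 1)], axis=1)
    return probs, probs.argmax(axis=1) + 1   # (S, R) probabilities, MAP zone
```

The highest-probability zone gives a map like the top-left panel of Figure 4, and the columns of the probability array give the per-zone probability maps.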
<fig id="j_nejsds71_fig_004">
<label>Figure 4</label>
<caption>
<p>Prediction map for the field as well as the uncertainty in the field. The prediction map is obtained by taking the zone with the highest probability for each location. The uncertainty in the predictions for each location is shown in the probability maps, which give the probability of each location being in each of the three zones.</p>
</caption>
<graphic xlink:href="nejsds71_g004.jpg"/>
</fig>
<p>While the zone prediction map in Figure <xref rid="j_nejsds71_fig_004">4</xref> produces zones that are spatially smoothed by the spatial basis functions, the covariates produce some non-contiguous regions that would be difficult to implement using variable rate irrigation given the current state of precision agriculture. For VRI implementation purposes, we therefore desire to further smooth the predicted zones into purely contiguous zones. To this end, we use the spatial clustering algorithm of [<xref ref-type="bibr" rid="j_nejsds71_ref_019">19</xref>], based on finite differences, to achieve more contiguous zones. Specifically, we spatially cluster using Ward dissimilarity [see <xref ref-type="bibr" rid="j_nejsds71_ref_019">19</xref>, for details] based on the expected zone <disp-formula-group id="j_nejsds71_dg_001">
<disp-formula id="j_nejsds71_eq_009">
<label>(3.1)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:mi mathvariant="italic">E</mml:mi>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mi mathvariant="italic">Y</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
</mml:mtd>
<mml:mtd class="align-even">
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>×</mml:mo>
<mml:mo fence="true" maxsize="1.19em" minsize="1.19em">[</mml:mo>
<mml:mi mathvariant="normal">Φ</mml:mi>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>−</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo>+</mml:mo><mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo fence="true" maxsize="1.19em" minsize="1.19em">]</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:mspace width="1em"/>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>×</mml:mo>
<mml:mo fence="true" maxsize="1.19em" minsize="1.19em">[</mml:mo>
<mml:mi mathvariant="normal">Φ</mml:mi>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo>+</mml:mo><mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}E\big(Y(\boldsymbol{s})\big)& =1\times \big[\Phi \big(0-\big({\widehat{f}_{L}}\big(\boldsymbol{X}(\boldsymbol{s})\big)+\widehat{w}(\boldsymbol{s})\big)\big)\big]\\ {} & \hspace{1em}+2\times \big[\Phi \big({\widehat{c}_{2}}-\big({\widehat{f}_{L}}\big(\boldsymbol{X}(\boldsymbol{s})\big)+\widehat{w}(\boldsymbol{s})\big)\big)\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_nejsds71_eq_010">
<label>(3.2)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:mspace width="1em"/>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="normal">Φ</mml:mi>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>−</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo>+</mml:mo><mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo fence="true" maxsize="1.19em" minsize="1.19em">]</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="align-odd"/>
<mml:mtd class="align-even">
<mml:mspace width="1em"/>
<mml:mo>+</mml:mo>
<mml:mn>3</mml:mn>
<mml:mo>×</mml:mo>
<mml:mo fence="true" maxsize="1.19em" minsize="1.19em">[</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="normal">Φ</mml:mi>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">c</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo>+</mml:mo><mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo fence="true" maxsize="1.19em" minsize="1.19em">]</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}& \hspace{1em}-\Phi \big(0-\big({\widehat{f}_{L}}\big(\boldsymbol{X}(\boldsymbol{s})\big)+\widehat{w}(\boldsymbol{s})\big)\big)\big]\\ {} & \hspace{1em}+3\times \big[1-\Phi \big({\widehat{c}_{2}}-\big({\widehat{f}_{L}}\big(\boldsymbol{X}(\boldsymbol{s})\big)+\widehat{w}(\boldsymbol{s})\big)\big)\big]\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
</disp-formula-group> where <inline-formula id="j_nejsds71_ineq_116"><alternatives><mml:math>
<mml:mi mathvariant="normal">Φ</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$\Phi (\cdot )$]]></tex-math></alternatives></inline-formula> is the standard normal cumulative density function. The result from this clustering process is shown in Figure <xref rid="j_nejsds71_fig_005">5</xref> and compared with the original predictions. While there are still some discrepancies between the clustered zones and the original zones, they are relatively similar and the clustering provides contiguous zones that would be viable to be implemented with VRI. We note that clustering the model results (using any spatial clustering method) is not required but is simply beneficial for our application and the implementation of the estimated zones for the growing season.</p>
<fig id="j_nejsds71_fig_005">
<label>Figure 5</label>
<caption>
<p>Clustered irrigation zones based on expected zone. The clusters are relatively similar and follow the same general patterns as the predictions but could more reasonably be implemented as irrigation zones using VRI technology.</p>
</caption>
<graphic xlink:href="nejsds71_g005.jpg"/>
</fig>
</sec>
<sec id="j_nejsds71_s_011">
<label>3.4</label>
<title>Effect of Covariates</title>
<p>While prediction was the primary goal of the analysis, it may also be of interest to examine the effect of the covariates on zone classification. However, one of the drawbacks of using neural network models as we have here is that the parameters have no intuitive interpretation. Indeed, interpretability of neural networks, generally, is an open area of research [<xref ref-type="bibr" rid="j_nejsds71_ref_029">29</xref>]. One possible solution is to use partial dependence plots (PDPs) and feature importance to intuitively understand the effects of covariates on the response. First, partial dependence plots are used to show the marginal effect of a covariate on the predicted outcome. Mathematically, the partial dependence plot for a covariate, say <inline-formula id="j_nejsds71_ineq_117"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${X_{p}}$]]></tex-math></alternatives></inline-formula>, is calculated as 
<disp-formula id="j_nejsds71_eq_011">
<label>(3.3)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:mtext>PDP</mml:mtext>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mtd>
<mml:mtd class="align-even">
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
<mml:mo>+</mml:mo><mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[\begin{aligned}{}\text{PDP}({X_{p}})& =\frac{1}{n}{\sum \limits_{i=1}^{n}}\big({\widehat{f}_{L}}\big({X_{p}},{\boldsymbol{X}_{-p}}({\boldsymbol{s}_{i}})\big)+\widehat{w}({\boldsymbol{s}_{i}})\big)\end{aligned}\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_nejsds71_ineq_118"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${\boldsymbol{X}_{-p}}({\boldsymbol{s}_{i}})$]]></tex-math></alternatives></inline-formula> is the vector of covariates with <inline-formula id="j_nejsds71_ineq_119"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${X_{p}}({\boldsymbol{s}_{i}})$]]></tex-math></alternatives></inline-formula> removed and <inline-formula id="j_nejsds71_ineq_120"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${\widehat{f}_{L}}(\cdot )$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_nejsds71_ineq_121"><alternatives><mml:math><mml:mover accent="false">
<mml:mrow>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">ˆ</mml:mo></mml:mover>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$\widehat{w}(\cdot )$]]></tex-math></alternatives></inline-formula> is our fitted spatial neural network model. The above is calculated for a grid of <inline-formula id="j_nejsds71_ineq_122"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${X_{p}}$]]></tex-math></alternatives></inline-formula> in the domain of <inline-formula id="j_nejsds71_ineq_123"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${X_{p}}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> to produce a curve representing the marginal effect of <inline-formula id="j_nejsds71_ineq_124"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${X_{p}}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula>. Intuitively, the partial dependence measure is calculated by replacing the variable of interest (<inline-formula id="j_nejsds71_ineq_125"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${X_{p}}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula>) with a single fixed value for all observations in the data and averaging the associated predictions across observations. Because we adopt a Bayesian approach, the uncertainty in these partial dependence plots is also accounted for via Monte Carlo sampling.</p>
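The computation in Equation (3.3) is straightforward to sketch. We assume a function returning the combined prediction (network output plus spatial effect) for a covariate matrix; the function and variable names below are ours, introduced only for illustration.

```python
import numpy as np

def partial_dependence(predict, X, p, grid):
    """PDP for covariate column p (Eq. 3.3): fix X_p at each grid value,
    predict at every location, and average across the n locations."""
    curve = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, p] = value                  # replace X_p with one fixed value everywhere
        curve.append(predict(X_mod).mean())  # average prediction over observations
    return np.array(curve)
```

Repeating this calculation over posterior draws of the model yields a posterior mean curve and credible bands for the marginal effect.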
<p>Figure <xref rid="j_nejsds71_fig_006">6</xref> shows the partial dependence plots for some of the covariates that were used in the analysis (covariates with no effect were omitted for brevity) where the black line represents the posterior mean of the marginal effect for the covariate and the shaded region is the 95% credible interval for the marginal effect. Elevation, yield, and the NDVI for 2018 and 2019 seemed to heavily influence the probability of belonging to an irrigation zone. For example, as elevation increases, the probability of belonging to a lower-water zone increases (Zone 1 corresponds to the zone with the smallest VWC which, in turn, would get the most irrigation to compensate). The other covariates appear to have a minimal effect on the predicted zone.</p>
<fig id="j_nejsds71_fig_006">
<label>Figure 6</label>
<caption>
<p>Partial dependence plots for influential covariates. The black line shows the marginal effect for the variable and the shaded area is the 95% credible interval for the marginal effect.</p>
</caption>
<graphic xlink:href="nejsds71_g006.jpg"/>
</fig>
<p>Another common measure of covariate influence in machine learning models is feature importance. For this analysis, we used permutation feature importance, which is calculated by permuting the values of a feature (covariate) and recomputing the ARI for the model with the permuted feature. Permuting the covariate breaks the relationship between the feature and the response, so prediction accuracy should decrease for features that are highly predictive of the response. After calculating the ARI for the model with the permuted feature, the feature importance for the <inline-formula id="j_nejsds71_ineq_126"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">th</mml:mi>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[${p^{\mathrm{th}}}$]]></tex-math></alternatives></inline-formula> covariate (<inline-formula id="j_nejsds71_ineq_127"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${X_{p}}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula>) is calculated as: 
<disp-formula id="j_nejsds71_eq_012">
<label>(3.4)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="right">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:mi mathvariant="italic">F</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">A</mml:mi>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>orig</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:mi mathvariant="italic">A</mml:mi>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>perm</mml:mtext>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ F{I_{p}}=AR{I_{\text{orig}}}-AR{I_{\text{perm}}}\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_nejsds71_ineq_128"><alternatives><mml:math>
<mml:mi mathvariant="italic">A</mml:mi>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>orig</mml:mtext>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[$AR{I_{\text{orig}}}$]]></tex-math></alternatives></inline-formula> is the ARI for the model before permutation and <inline-formula id="j_nejsds71_ineq_129"><alternatives><mml:math>
<mml:mi mathvariant="italic">A</mml:mi>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>perm</mml:mtext>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[$AR{I_{\text{perm}}}$]]></tex-math></alternatives></inline-formula> is the ARI for the model after feature <inline-formula id="j_nejsds71_ineq_130"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[${X_{p}}(\boldsymbol{s})$]]></tex-math></alternatives></inline-formula> had been permuted. Intuitively, higher <inline-formula id="j_nejsds71_ineq_131"><alternatives><mml:math>
<mml:mi mathvariant="italic">F</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[$F{I_{p}}$]]></tex-math></alternatives></inline-formula> equates to more important features in determining the zone delineation.</p>
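Equation (3.4) can be sketched as follows, assuming a function that returns predicted zone labels for a covariate matrix and using scikit-learn's adjusted Rand index; the function and variable names are ours, not from the original analysis code.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def feature_importance(predict_zones, X, zones_true, p, seed=0):
    """Permutation feature importance FI_p = ARI_orig - ARI_perm (Eq. 3.4)."""
    rng = np.random.default_rng(seed)
    ari_orig = adjusted_rand_score(zones_true, predict_zones(X))
    X_perm = X.copy()
    X_perm[:, p] = rng.permutation(X_perm[:, p])  # break the feature-response link
    ari_perm = adjusted_rand_score(zones_true, predict_zones(X_perm))
    return ari_orig - ari_perm
```

A feature whose permutation leaves predictions unchanged gets an importance of zero, while a highly predictive feature produces a large drop in ARI and hence a large importance.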
<p>The feature importance plot in Figure <xref rid="j_nejsds71_fig_007">7</xref> shows the feature importance for the 10 covariates included in the model. Consistent with the partial dependence plots, the feature importance results also indicate that elevation, yield, and the NDVI from 2018 and 2019 are important covariates in determining the irrigation zone. Hence, for other agricultural fields where irrigation zones are desired, collecting the most important features should provide the most effective zones for variable rate irrigation.</p>
<fig id="j_nejsds71_fig_007">
<label>Figure 7</label>
<caption>
<p>Feature importance for the covariates. Consistent with the partial dependence plots, the most important features according to the importance measure are elevation, yield, and the NDVI for 2018 and 2019.</p>
</caption>
<graphic xlink:href="nejsds71_g007.jpg"/>
</fig>
</sec>
</sec>
<sec id="j_nejsds71_s_012">
<label>4</label>
<title>Conclusion</title>
<p>This analysis presents a Bayesian spatial neural network model that uses easily obtainable predictors such as elevation, slope, and past crop yields for irrigation zone delineation. We propose this model as an alternative to the expensive and time-consuming process of measuring volumetric water content. The model fuses statistical modeling and deep learning by harnessing the predictive ability of artificial neural networks while quantifying uncertainty with Bayesian methods and capturing spatial correlation in the irrigation zones through spatial modeling. The analysis showed that, for the field in Rexburg, Idaho, the most influential covariates for delineating irrigation zones were elevation, yield, and NDVI.</p>
<p>There are some issues with the proposed method that could be examined to improve it. As mentioned, the posterior mixing of the neural network weights was a challenge. This is to be expected given the non-identifiability of neural network weights as discussed in [<xref ref-type="bibr" rid="j_nejsds71_ref_010">10</xref>]. While we employed measures such as strong prior assumptions (the equivalent of penalization in traditional neural network model fitting), one possible solution is to use Hamiltonian Monte Carlo techniques, which could integrate the backpropagation algorithm within a Bayesian posterior sampling paradigm.</p>
<p>Three irrigation zones were chosen for this analysis, but the performance of the model as the number of zones increases has yet to be studied, and it is unknown whether the model will perform as well with more than three irrigation zones. Further, more irrigation zones might increase the efficiency of the VRI technology. Future research could consider the effect of the number of zones on the fitted model.</p>
<p>In addition to the potential issues that should be further studied, there is future work that could build on the results of this analysis. First, the model has only been applied to the one field in Rexburg. It would be beneficial to use the model to delineate zones in other fields, since the model will be most useful when it is portable to other fields and scenarios. At the moment, its efficacy is only known for the field of winter wheat in Rexburg.</p>
<p>In addition, to increase the portability of the model, it would be useful to consider other covariates that could be used to determine the volumetric water content. The covariates used in this analysis were those provided to us, and consequently there may be other covariates that would be beneficial for delineating irrigation zones. Of the ten covariates we used, a number did not appear to be particularly important in determining the irrigation zone based on the partial dependence and feature importance plots. The impact of adding and removing covariates from the model could be examined further.</p>
<p>As mentioned in the Introduction, this research focuses on determining static water management zones. Alternatively, time-varying or dynamic zones could be created to alter the water applied within a growing season according to the needs of the plant [see <xref ref-type="bibr" rid="j_nejsds71_ref_013">13</xref>]. We note that our methods could serve as a foundation for determining dynamic zones, but such an application would not only require within-season covariates such as daily precipitation but also considerations of computational scalability, which was less of a concern for this application.</p>
<p>Overall, Bayesian spatial neural network models can create accurate irrigation zones from easily obtained data about a field without the painstaking effort required to determine the volumetric water content. As a result, these models could make the implementation of variable rate irrigation easier for farmers.</p>
</sec>
<sec id="j_nejsds71_s_013">
<title>Declarations</title>
<p><bold>Data Availability.</bold> All code and data for this research are available at <uri>https://github.com/dteuscher1/Irrigation_NeuralNet</uri>.</p>
</sec>
</body>
<back>
<ref-list id="j_nejsds71_reflist_001">
<title>References</title>
<ref id="j_nejsds71_ref_001">
<label>[1]</label><mixed-citation publication-type="journal"><string-name><surname>Abiodun</surname>, <given-names>O. I.</given-names></string-name>, <string-name><surname>Jantan</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Omolara</surname>, <given-names>A. E.</given-names></string-name>, <string-name><surname>Dada</surname>, <given-names>K. V.</given-names></string-name>, <string-name><surname>Mohamed</surname>, <given-names>N. A.</given-names></string-name> and <string-name><surname>Arshad</surname>, <given-names>H.</given-names></string-name> (<year>2018</year>). <article-title>State-of-the-art in artificial neural network applications: A survey</article-title>. <source>Heliyon</source> <volume>4</volume>(<issue>11</issue>) <fpage>00938</fpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_002">
<label>[2]</label><mixed-citation publication-type="journal"><string-name><surname>Albert</surname>, <given-names>J. H.</given-names></string-name> and <string-name><surname>Chib</surname>, <given-names>S.</given-names></string-name> (<year>1993</year>). <article-title>Bayesian analysis of binary and polychotomous response data</article-title>. <source>Journal of the American statistical Association</source> <volume>88</volume>(<issue>422</issue>) <fpage>669</fpage>–<lpage>679</lpage>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=1224394">MR1224394</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_003">
<label>[3]</label><mixed-citation publication-type="chapter"><string-name><surname>Bahat</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Netzer</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Ben-Gal</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Grünzweig</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Peeters</surname>, <given-names>A.</given-names></string-name> and <string-name><surname>Cohen</surname>, <given-names>Y.</given-names></string-name> (<year>2019</year>). <chapter-title>Comparison of water potential and yield parameters under uniform and variable rate drip irrigation in a cabernet sauvignon vineyard</chapter-title>. In <source>Precision Agriculture’19</source> <fpage>574</fpage>–<lpage>577</lpage> <publisher-name>Wageningen Academic Publishers</publisher-name>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_004">
<label>[4]</label><mixed-citation publication-type="journal"><string-name><surname>Berrett</surname>, <given-names>C.</given-names></string-name> and <string-name><surname>Calder</surname>, <given-names>C. A.</given-names></string-name> (<year>2016</year>). <article-title>Bayesian spatial binary classification</article-title>. <source>Spatial Statistics</source> <volume>16</volume> <fpage>72</fpage>–<lpage>102</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.spasta.2016.01.004" xlink:type="simple">https://doi.org/10.1016/j.spasta.2016.01.004</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=3493089">MR3493089</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_005">
<label>[5]</label><mixed-citation publication-type="journal"><string-name><surname>Bhakta</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Phadikar</surname>, <given-names>S.</given-names></string-name> and <string-name><surname>Majumder</surname>, <given-names>K.</given-names></string-name> (<year>2019</year>). <article-title>State-of-the-art technologies in precision agriculture: a systematic review</article-title>. <source>Journal of the Science of Food and Agriculture</source> <volume>99</volume>(<issue>11</issue>) <fpage>4878</fpage>–<lpage>4888</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_006">
<label>[6]</label><mixed-citation publication-type="journal"><string-name><surname>Bornn</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Shaddick</surname>, <given-names>G.</given-names></string-name> and <string-name><surname>Zidek</surname>, <given-names>J. V.</given-names></string-name> (<year>2012</year>). <article-title>Modeling nonstationary processes through dimension expansion</article-title>. <source>Journal of the American Statistical Association</source> <volume>107</volume>(<issue>497</issue>) <fpage>281</fpage>–<lpage>289</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/01621459.2011.646919" xlink:type="simple">https://doi.org/10.1080/01621459.2011.646919</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=2949359">MR2949359</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_007">
<label>[7]</label><mixed-citation publication-type="journal"><string-name><surname>Chen</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Reich</surname>, <given-names>B. J.</given-names></string-name> and <string-name><surname>Sun</surname>, <given-names>Y.</given-names></string-name> (<year>2024</year>). <article-title>DeepKriging: Spatially dependent deep neural networks for spatial prediction</article-title>. <source>Statistica Sinica</source> <volume>34</volume> <fpage>291</fpage>–<lpage>311</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.5705/ss.202021.0277" xlink:type="simple">https://doi.org/10.5705/ss.202021.0277</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=4683573">MR4683573</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_008">
<label>[8]</label><mixed-citation publication-type="journal"><string-name><surname>Cisternas</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Velásquez</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Caro</surname>, <given-names>A.</given-names></string-name> and <string-name><surname>Rodríguez</surname>, <given-names>A.</given-names></string-name> (<year>2020</year>). <article-title>Systematic literature review of implementations of precision agriculture</article-title>. <source>Computers and Electronics in Agriculture</source> <volume>176</volume> <elocation-id>105626</elocation-id>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_009">
<label>[9]</label><mixed-citation publication-type="chapter"><string-name><surname>Cohen</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Gogumalla</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Bahat</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Netzer</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Ben-Gal</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Lenski</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Michael</surname>, <given-names>Y.</given-names></string-name> and <string-name><surname>Helman</surname>, <given-names>D.</given-names></string-name> (<year>2019</year>). <chapter-title>Can time series of multispectral satellite images be used to estimate stem water potential in vineyards?</chapter-title> In <source>Precision Agriculture’19</source> <fpage>1</fpage>–<lpage>5</lpage> <publisher-name>Wageningen Academic Publishers</publisher-name>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_010">
<label>[10]</label><mixed-citation publication-type="other"><string-name><surname>D’Amour</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Heller</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Moldovan</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Adlam</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Alipanahi</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Beutel</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Deaton</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Eisenstein</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Hoffman</surname>, <given-names>M. D.</given-names></string-name> <etal>et al.</etal> (<year>2020</year>). <article-title>Underspecification presents challenges for credibility in modern machine learning</article-title>. <italic>arXiv preprint arXiv:</italic><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2011.03395"><italic>2011.03395</italic></ext-link>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_011">
<label>[11]</label><mixed-citation publication-type="journal"><string-name><surname>Dixon</surname>, <given-names>M. F.</given-names></string-name>, <string-name><surname>Polson</surname>, <given-names>N. G.</given-names></string-name> and <string-name><surname>Sokolov</surname>, <given-names>V. O.</given-names></string-name> (<year>2019</year>). <article-title>Deep learning for spatio-temporal modeling: Dynamic traffic flows and high frequency trading</article-title>. <source>Applied Stochastic Models in Business and Industry</source> <volume>35</volume>(<issue>3</issue>) <fpage>788</fpage>–<lpage>807</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1002/asmb.2399" xlink:type="simple">https://doi.org/10.1002/asmb.2399</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=3974250">MR3974250</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_012">
<label>[12]</label><mixed-citation publication-type="journal"><string-name><surname>Emmert-Streib</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Yang</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Feng</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Tripathi</surname>, <given-names>S.</given-names></string-name> and <string-name><surname>Dehmer</surname>, <given-names>M.</given-names></string-name> (<year>2020</year>). <article-title>An introductory review of deep learning for prediction models with big data</article-title>. <source>Frontiers in Artificial Intelligence</source> <volume>3</volume> <fpage>4</fpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_013">
<label>[13]</label><mixed-citation publication-type="journal"><string-name><surname>Fontanet</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Scudiero</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Skaggs</surname>, <given-names>T. H.</given-names></string-name>, <string-name><surname>Fernàndez-Garcia</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Ferrer</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Rodrigo</surname>, <given-names>G.</given-names></string-name> and <string-name><surname>Bellvert</surname>, <given-names>J.</given-names></string-name> (<year>2020</year>). <article-title>Dynamic management zones for irrigation scheduling</article-title>. <source>Agricultural Water Management</source> <volume>238</volume> <elocation-id>106207</elocation-id>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_014">
<label>[14]</label><mixed-citation publication-type="book"><string-name><surname>Friedman</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Hastie</surname>, <given-names>T.</given-names></string-name> and <string-name><surname>Tibshirani</surname>, <given-names>R.</given-names></string-name> (<year>2001</year>) <source>The Elements of Statistical Learning</source>. <series>Springer Series in Statistics</series>. <publisher-name>Springer</publisher-name>, <publisher-loc>New York</publisher-loc>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/978-0-387-21606-5" xlink:type="simple">https://doi.org/10.1007/978-0-387-21606-5</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=1851606">MR1851606</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_015">
<label>[15]</label><mixed-citation publication-type="chapter"><string-name><surname>Gal</surname>, <given-names>Y.</given-names></string-name> and <string-name><surname>Ghahramani</surname>, <given-names>Z.</given-names></string-name> (<year>2016</year>). <chapter-title>Dropout as a Bayesian approximation: Representing model uncertainty in deep learning</chapter-title>. In <source>International Conference on Machine Learning</source> <fpage>1050</fpage>–<lpage>1059</lpage>. <publisher-name>PMLR</publisher-name>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_016">
<label>[16]</label><mixed-citation publication-type="journal"><string-name><surname>Ghosh</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Maiti</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Kim</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Chakraborty</surname>, <given-names>S.</given-names></string-name> and <string-name><surname>Tewari</surname>, <given-names>A.</given-names></string-name> (<year>2004</year>). <article-title>Hierarchical Bayesian neural networks: an application to a prostate cancer study</article-title>. <source>Journal of the American Statistical Association</source> <volume>99</volume>(<issue>467</issue>) <fpage>601</fpage>–<lpage>608</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1198/016214504000000665" xlink:type="simple">https://doi.org/10.1198/016214504000000665</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=2086386">MR2086386</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_017">
<label>[17]</label><mixed-citation publication-type="journal"><string-name><surname>Haario</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Saksman</surname>, <given-names>E.</given-names></string-name> and <string-name><surname>Tamminen</surname>, <given-names>J.</given-names></string-name> (<year>2001</year>). <article-title>An adaptive Metropolis algorithm</article-title>. <source>Bernoulli</source> <volume>7</volume>(<issue>2</issue>) <fpage>223</fpage>–<lpage>242</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.2307/3318737" xlink:type="simple">https://doi.org/10.2307/3318737</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=1828504">MR1828504</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_018">
<label>[18]</label><mixed-citation publication-type="journal"><string-name><surname>Haghverdi</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Leib</surname>, <given-names>B. G.</given-names></string-name>, <string-name><surname>Washington-Allen</surname>, <given-names>R. A.</given-names></string-name>, <string-name><surname>Ayers</surname>, <given-names>P. D.</given-names></string-name> and <string-name><surname>Buschermohle</surname>, <given-names>M. J.</given-names></string-name> (<year>2015</year>). <article-title>Perspectives on delineating management zones for variable rate irrigation</article-title>. <source>Computers and Electronics in Agriculture</source> <volume>117</volume> <fpage>154</fpage>–<lpage>167</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_019">
<label>[19]</label><mixed-citation publication-type="journal"><string-name><surname>Heaton</surname>, <given-names>M. J.</given-names></string-name>, <string-name><surname>Christensen</surname>, <given-names>W. F.</given-names></string-name> and <string-name><surname>Terres</surname>, <given-names>M. A.</given-names></string-name> (<year>2017</year>). <article-title>Nonstationary Gaussian process models using spatial hierarchical clustering from finite differences</article-title>. <source>Technometrics</source> <volume>59</volume>(<issue>1</issue>) <fpage>93</fpage>–<lpage>101</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/00401706.2015.1102763" xlink:type="simple">https://doi.org/10.1080/00401706.2015.1102763</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=3604192">MR3604192</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_020">
<label>[20]</label><mixed-citation publication-type="journal"><string-name><surname>Hedley</surname>, <given-names>C. B.</given-names></string-name> and <string-name><surname>Yule</surname>, <given-names>I. J.</given-names></string-name> (<year>2009</year>). <article-title>Soil water status mapping and two variable-rate irrigation scenarios</article-title>. <source>Precision Agriculture</source> <volume>10</volume>(<issue>4</issue>) <fpage>342</fpage>–<lpage>355</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_021">
<label>[21]</label><mixed-citation publication-type="journal"><string-name><surname>Helman</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Bahat</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Netzer</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Ben-Gal</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Alchanatis</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Peeters</surname>, <given-names>A.</given-names></string-name> and <string-name><surname>Cohen</surname>, <given-names>Y.</given-names></string-name> (<year>2018</year>). <article-title>Using time series of high-resolution planet satellite images to monitor grapevine stem water potential in commercial vineyards</article-title>. <source>Remote Sensing</source> <volume>10</volume>(<issue>10</issue>) <fpage>1615</fpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_022">
<label>[22]</label><mixed-citation publication-type="journal"><string-name><surname>Higgs</surname>, <given-names>M. D.</given-names></string-name> and <string-name><surname>Hoeting</surname>, <given-names>J. A.</given-names></string-name> (<year>2010</year>). <article-title>A clipped latent variable model for spatially correlated ordered categorical data</article-title>. <source>Computational Statistics &amp; Data Analysis</source> <volume>54</volume>(<issue>8</issue>) <fpage>1999</fpage>–<lpage>2011</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.csda.2010.02.024" xlink:type="simple">https://doi.org/10.1016/j.csda.2010.02.024</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=2640303">MR2640303</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_023">
<label>[23]</label><mixed-citation publication-type="journal"><string-name><surname>Hughes</surname>, <given-names>J.</given-names></string-name> and <string-name><surname>Haran</surname>, <given-names>M.</given-names></string-name> (<year>2013</year>). <article-title>Dimension reduction and alleviation of confounding for spatial generalized linear mixed models</article-title>. <source>Journal of the Royal Statistical Society: Series B (Statistical Methodology)</source> <volume>75</volume>(<issue>1</issue>) <fpage>139</fpage>–<lpage>159</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1111/j.1467-9868.2012.01041.x" xlink:type="simple">https://doi.org/10.1111/j.1467-9868.2012.01041.x</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=3008275">MR3008275</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_024">
<label>[24]</label><mixed-citation publication-type="journal"><string-name><surname>Jung</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Maeda</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Chang</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Bhandari</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Ashapure</surname>, <given-names>A.</given-names></string-name> and <string-name><surname>Landivar-Bowles</surname>, <given-names>J.</given-names></string-name> (<year>2021</year>). <article-title>The potential of remote sensing and artificial intelligence as tools to improve the resilience of agriculture production systems</article-title>. <source>Current Opinion in Biotechnology</source> <volume>70</volume> <fpage>15</fpage>–<lpage>22</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_025">
<label>[25]</label><mixed-citation publication-type="chapter"><string-name><surname>Katz</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Naor</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Litaor</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Ben-Gal</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Alchanatis</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Peres</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Peeters</surname>, <given-names>A.</given-names></string-name> and <string-name><surname>Cohen</surname>, <given-names>Y.</given-names></string-name> (<year>2021</year>). <chapter-title>Methodology for comparison between uniform and variable rate application in a drip-irrigated peach orchard</chapter-title>. In <source>Precision Agriculture’21</source> <fpage>823</fpage>–<lpage>842</lpage> <publisher-name>Wageningen Academic Publishers</publisher-name>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_026">
<label>[26]</label><mixed-citation publication-type="journal"><string-name><surname>Khanal</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Fulton</surname>, <given-names>J.</given-names></string-name> and <string-name><surname>Shearer</surname>, <given-names>S.</given-names></string-name> (<year>2017</year>). <article-title>An overview of current and potential applications of thermal remote sensing in precision agriculture</article-title>. <source>Computers and Electronics in Agriculture</source> <volume>139</volume> <fpage>22</fpage>–<lpage>32</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_027">
<label>[27]</label><mixed-citation publication-type="chapter"><string-name><surname>Klute</surname>, <given-names>A.</given-names></string-name> (<year>1986</year>). <chapter-title>Water retention: laboratory methods</chapter-title>. In <source>Methods of Soil Analysis, Part 1: Physical and Mineralogical Methods</source> <volume>5</volume> <fpage>635</fpage>–<lpage>662</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_028">
<label>[28]</label><mixed-citation publication-type="other"><string-name><surname>Lee</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Bahri</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Novak</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Schoenholz</surname>, <given-names>S. S.</given-names></string-name>, <string-name><surname>Pennington</surname>, <given-names>J.</given-names></string-name> and <string-name><surname>Sohl-Dickstein</surname>, <given-names>J.</given-names></string-name> (<year>2017</year>). <article-title>Deep neural networks as Gaussian processes</article-title>. <italic>arXiv preprint arXiv:</italic><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1711.00165"><italic>1711.00165</italic></ext-link>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_029">
<label>[29]</label><mixed-citation publication-type="journal"><string-name><surname>Linardatos</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Papastefanopoulos</surname>, <given-names>V.</given-names></string-name> and <string-name><surname>Kotsiantis</surname>, <given-names>S.</given-names></string-name> (<year>2020</year>). <article-title>Explainable AI: A review of machine learning interpretability methods</article-title>. <source>Entropy</source> <volume>23</volume>(<issue>1</issue>) <fpage>18</fpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_030">
<label>[30]</label><mixed-citation publication-type="journal"><string-name><surname>Lindblom</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Lundström</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Ljung</surname>, <given-names>M.</given-names></string-name> and <string-name><surname>Jonsson</surname>, <given-names>A.</given-names></string-name> (<year>2017</year>). <article-title>Promoting sustainable intensification in precision agriculture: review of decision support systems development and strategies</article-title>. <source>Precision Agriculture</source> <volume>18</volume>(<issue>3</issue>) <fpage>309</fpage>–<lpage>331</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_031">
<label>[31]</label><mixed-citation publication-type="journal"><string-name><surname>Lo</surname>, <given-names>T. H.</given-names></string-name>, <string-name><surname>Heeren</surname>, <given-names>D. M.</given-names></string-name>, <string-name><surname>Mateos</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Luck</surname>, <given-names>J. D.</given-names></string-name>, <string-name><surname>Martin</surname>, <given-names>D. L.</given-names></string-name>, <string-name><surname>Miller</surname>, <given-names>K. A.</given-names></string-name>, <string-name><surname>Barker</surname>, <given-names>J. B.</given-names></string-name> and <string-name><surname>Shaver</surname>, <given-names>T. M.</given-names></string-name> (<year>2017</year>). <article-title>Field characterization of field capacity and root zone available water capacity for variable rate irrigation</article-title>. <source>Applied Engineering in Agriculture</source> <volume>33</volume>(<issue>4</issue>) <fpage>559</fpage>–<lpage>572</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_032">
<label>[32]</label><mixed-citation publication-type="journal"><string-name><surname>Loures</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Chamizo</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Ferreira</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Loures</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Castanho</surname>, <given-names>R.</given-names></string-name> and <string-name><surname>Panagopoulos</surname>, <given-names>T.</given-names></string-name> (<year>2020</year>). <article-title>Assessing the effectiveness of precision agriculture management systems in Mediterranean small farms</article-title>. <source>Sustainability</source> <volume>12</volume>(<issue>9</issue>) <fpage>3765</fpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_033">
<label>[33]</label><mixed-citation publication-type="other"><string-name><surname>Matthews</surname>, <given-names>A. G. d. G.</given-names></string-name>, <string-name><surname>Rowland</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Hron</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Turner</surname>, <given-names>R. E.</given-names></string-name> and <string-name><surname>Ghahramani</surname>, <given-names>Z.</given-names></string-name> (<year>2018</year>). <article-title>Gaussian process behaviour in wide deep neural networks</article-title>. <italic>arXiv preprint arXiv:</italic><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1804.11271"><italic>1804.11271</italic></ext-link>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_034">
<label>[34]</label><mixed-citation publication-type="journal"><string-name><surname>McDermott</surname>, <given-names>P. L.</given-names></string-name> and <string-name><surname>Wikle</surname>, <given-names>C. K.</given-names></string-name> (<year>2019</year>). <article-title>Bayesian recurrent neural network models for forecasting and quantifying uncertainty in spatial-temporal data</article-title>. <source>Entropy</source> <volume>21</volume>(<issue>2</issue>) <fpage>184</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/e21020184" xlink:type="simple">https://doi.org/10.3390/e21020184</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=3923929">MR3923929</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_035">
<label>[35]</label><mixed-citation publication-type="book"><string-name><surname>Neal</surname>, <given-names>R. M.</given-names></string-name> (<year>2012</year>) <source>Bayesian Learning for Neural Networks</source> <volume>118</volume>. <publisher-name>Springer Science &amp; Business Media</publisher-name>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_036">
<label>[36]</label><mixed-citation publication-type="other"><string-name><surname>Nwankpa</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Ijomah</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Gachagan</surname>, <given-names>A.</given-names></string-name> and <string-name><surname>Marshall</surname>, <given-names>S.</given-names></string-name> (<year>2018</year>). <article-title>Activation functions: Comparison of trends in practice and research for deep learning</article-title>. <italic>arXiv preprint arXiv:</italic><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1811.03378"><italic>1811.03378</italic></ext-link>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_037">
<label>[37]</label><mixed-citation publication-type="journal"><string-name><surname>Ohana-Levi</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Ben-Gal</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Peeters</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Termin</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Linker</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Baram</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Raveh</surname>, <given-names>E.</given-names></string-name> and <string-name><surname>Paz-Kagan</surname>, <given-names>T.</given-names></string-name> (<year>2021</year>). <article-title>A comparison between spatial clustering models for determining N-fertilization management zones in orchards</article-title>. <source>Precision Agriculture</source> <volume>22</volume>(<issue>1</issue>) <fpage>99</fpage>–<lpage>123</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_038">
<label>[38]</label><mixed-citation publication-type="journal"><string-name><surname>Ohana-Levi</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Bahat</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Peeters</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Shtein</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Netzer</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Cohen</surname>, <given-names>Y.</given-names></string-name> and <string-name><surname>Ben-Gal</surname>, <given-names>A.</given-names></string-name> (<year>2019</year>). <article-title>A weighted multivariate spatial clustering model to determine irrigation management zones</article-title>. <source>Computers and Electronics in Agriculture</source> <volume>162</volume> <fpage>719</fpage>–<lpage>731</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_039">
<label>[39]</label><mixed-citation publication-type="chapter"><string-name><surname>O’Shaughnessy</surname>, <given-names>S. A.</given-names></string-name>, <string-name><surname>Evett</surname>, <given-names>S. R.</given-names></string-name>, <string-name><surname>Andrade</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Workneh</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Price</surname>, <given-names>J. A.</given-names></string-name> and <string-name><surname>Rush</surname>, <given-names>C. M.</given-names></string-name> (<year>2015</year>). <chapter-title>Site-specific variable rate irrigation as a means to enhance water use efficiency</chapter-title>. In <source>2015 ASABE/IA Irrigation Symposium: Emerging Technologies for Sustainable Irrigation – A Tribute to the Career of Terry Howell, Sr. Conference Proceedings</source> <fpage>1</fpage>–<lpage>21</lpage>. <publisher-name>American Society of Agricultural and Biological Engineers</publisher-name>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_040">
<label>[40]</label><mixed-citation publication-type="journal"><string-name><surname>O’Shaughnessy</surname>, <given-names>S. A.</given-names></string-name>, <string-name><surname>Evett</surname>, <given-names>S. R.</given-names></string-name>, <string-name><surname>Colaizzi</surname>, <given-names>P. D.</given-names></string-name>, <string-name><surname>Andrade</surname>, <given-names>M. A.</given-names></string-name>, <string-name><surname>Marek</surname>, <given-names>T. H.</given-names></string-name>, <string-name><surname>Heeren</surname>, <given-names>D. M.</given-names></string-name>, <string-name><surname>Lamm</surname>, <given-names>F. R.</given-names></string-name> and <string-name><surname>LaRue</surname>, <given-names>J. L.</given-names></string-name> (<year>2019</year>). <article-title>Identifying advantages and disadvantages of variable rate irrigation: An updated review</article-title>. <source>Applied Engineering in Agriculture</source> <volume>35</volume>(<issue>6</issue>) <fpage>837</fpage>–<lpage>852</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_041">
<label>[41]</label><mixed-citation publication-type="journal"><string-name><surname>Pang</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Shen</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Cao</surname>, <given-names>L.</given-names></string-name> and <string-name><surname>Hengel</surname>, <given-names>A. V. D.</given-names></string-name> (<year>2021</year>). <article-title>Deep learning for anomaly detection: A review</article-title>. <source>ACM Computing Surveys (CSUR)</source> <volume>54</volume>(<issue>2</issue>) <fpage>1</fpage>–<lpage>38</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_042">
<label>[42]</label><mixed-citation publication-type="journal"><string-name><surname>Perea</surname>, <given-names>R. G.</given-names></string-name>, <string-name><surname>Daccache</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Díaz</surname>, <given-names>J. R.</given-names></string-name>, <string-name><surname>Poyato</surname>, <given-names>E. C.</given-names></string-name> and <string-name><surname>Knox</surname>, <given-names>J. W.</given-names></string-name> (<year>2018</year>). <article-title>Modelling impacts of precision irrigation on crop yield and in-field water management</article-title>. <source>Precision Agriculture</source> <volume>19</volume>(<issue>3</issue>) <fpage>497</fpage>–<lpage>512</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_043">
<label>[43]</label><mixed-citation publication-type="journal"><string-name><surname>Polson</surname>, <given-names>N. G.</given-names></string-name> and <string-name><surname>Sokolov</surname>, <given-names>V.</given-names></string-name> (<year>2019</year>). <article-title>Bayesian regularization: From Tikhonov to horseshoe</article-title>. <source>Wiley Interdisciplinary Reviews: Computational Statistics</source> <volume>11</volume>(<issue>4</issue>) <fpage>1463</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1002/wics.1463" xlink:type="simple">https://doi.org/10.1002/wics.1463</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=3999526">MR3999526</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_044">
<label>[44]</label><mixed-citation publication-type="other"><string-name><surname>Ramachandran</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Zoph</surname>, <given-names>B.</given-names></string-name> and <string-name><surname>Le</surname>, <given-names>Q. V.</given-names></string-name> (<year>2017</year>). Searching for activation functions. <italic>arXiv preprint arXiv:</italic><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1710.05941"><italic>1710.05941</italic></ext-link>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_045">
<label>[45]</label><mixed-citation publication-type="journal"><string-name><surname>Saha</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Basu</surname>, <given-names>S.</given-names></string-name> and <string-name><surname>Datta</surname>, <given-names>A.</given-names></string-name> (<year>2023</year>). <article-title>Random forests for spatially dependent data</article-title>. <source>Journal of the American Statistical Association</source> <volume>118</volume>(<issue>541</issue>) <fpage>665</fpage>–<lpage>683</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/01621459.2021.1950003" xlink:type="simple">https://doi.org/10.1080/01621459.2021.1950003</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=4571149">MR4571149</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_046">
<label>[46]</label><mixed-citation publication-type="other"><string-name><surname>Sauer</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Gramacy</surname>, <given-names>R. B.</given-names></string-name> and <string-name><surname>Higdon</surname>, <given-names>D.</given-names></string-name> (<year>2020</year>). Active learning for deep Gaussian process surrogates. <italic>arXiv preprint arXiv:</italic><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2012.08015"><italic>2012.08015</italic></ext-link>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/00401706.2021.2008505" xlink:type="simple">https://doi.org/10.1080/00401706.2021.2008505</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=4543056">MR4543056</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_047">
<label>[47]</label><mixed-citation publication-type="journal"><string-name><surname>Shand</surname>, <given-names>L.</given-names></string-name> and <string-name><surname>Li</surname>, <given-names>B.</given-names></string-name> (<year>2017</year>). <article-title>Modeling nonstationarity in space and time</article-title>. <source>Biometrics</source> <volume>73</volume>(<issue>3</issue>) <fpage>759</fpage>–<lpage>768</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1111/biom.12656" xlink:type="simple">https://doi.org/10.1111/biom.12656</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=3713110">MR3713110</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_048">
<label>[48]</label><mixed-citation publication-type="journal"><string-name><surname>Sigrist</surname>, <given-names>F.</given-names></string-name> (<year>2022</year>). <article-title>Gaussian process boosting</article-title>. <source>The Journal of Machine Learning Research</source> <volume>23</volume>(<issue>1</issue>) <fpage>10565</fpage>–<lpage>10610</lpage>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=4577671">MR4577671</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_049">
<label>[49]</label><mixed-citation publication-type="journal"><string-name><surname>Sit</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Demiray</surname>, <given-names>B. Z.</given-names></string-name>, <string-name><surname>Xiang</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Ewing</surname>, <given-names>G. J.</given-names></string-name>, <string-name><surname>Sermet</surname>, <given-names>Y.</given-names></string-name> and <string-name><surname>Demir</surname>, <given-names>I.</given-names></string-name> (<year>2020</year>). <article-title>A comprehensive review of deep learning applications in hydrology and water resources</article-title>. <source>Water Science and Technology</source> <volume>82</volume>(<issue>12</issue>) <fpage>2635</fpage>–<lpage>2670</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_050">
<label>[50]</label><mixed-citation publication-type="book"><string-name><surname>Steven</surname>, <given-names>M.</given-names></string-name> and <string-name><surname>Clark</surname>, <given-names>J. A.</given-names></string-name> (<year>2013</year>) <source>Applications of Remote Sensing in Agriculture</source>. <publisher-name>Elsevier</publisher-name>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_051">
<label>[51]</label><mixed-citation publication-type="journal"><string-name><surname>Tibshirani</surname>, <given-names>R.</given-names></string-name> (<year>1996</year>). <article-title>Regression shrinkage and selection via the lasso</article-title>. <source>Journal of the Royal Statistical Society: Series B (Methodological)</source> <volume>58</volume>(<issue>1</issue>) <fpage>267</fpage>–<lpage>288</lpage>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=1379242">MR1379242</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_052">
<label>[52]</label><mixed-citation publication-type="journal"><string-name><surname>Tran</surname>, <given-names>M. -N.</given-names></string-name>, <string-name><surname>Nguyen</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Nott</surname>, <given-names>D.</given-names></string-name> and <string-name><surname>Kohn</surname>, <given-names>R.</given-names></string-name> (<year>2020</year>). <article-title>Bayesian deep net GLM and GLMM</article-title>. <source>Journal of Computational and Graphical Statistics</source> <volume>29</volume>(<issue>1</issue>) <fpage>97</fpage>–<lpage>113</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/10618600.2019.1637747" xlink:type="simple">https://doi.org/10.1080/10618600.2019.1637747</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=4085867">MR4085867</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_053">
<label>[53]</label><mixed-citation publication-type="book"><string-name><surname>Wani</surname>, <given-names>M. A.</given-names></string-name>, <string-name><surname>Bhat</surname>, <given-names>F. A.</given-names></string-name>, <string-name><surname>Afzal</surname>, <given-names>S.</given-names></string-name> and <string-name><surname>Khan</surname>, <given-names>A. I.</given-names></string-name> (<year>2020</year>) <source>Advances in Deep Learning</source>. <publisher-name>Springer</publisher-name>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/978-981-13-6794-6" xlink:type="simple">https://doi.org/10.1007/978-981-13-6794-6</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=3966423">MR3966423</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_054">
<label>[54]</label><mixed-citation publication-type="journal"><string-name><surname>Wikle</surname>, <given-names>C. K.</given-names></string-name> (<year>2019</year>). <article-title>Comparison of deep neural networks and deep hierarchical models for spatio-temporal data</article-title>. <source>Journal of Agricultural, Biological and Environmental Statistics</source> <volume>24</volume>(<issue>2</issue>) <fpage>175</fpage>–<lpage>203</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/s13253-019-00361-7" xlink:type="simple">https://doi.org/10.1007/s13253-019-00361-7</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=3945276">MR3945276</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_055">
<label>[55]</label><mixed-citation publication-type="chapter"><string-name><surname>Williams</surname>, <given-names>C. K.</given-names></string-name> (<year>1997</year>). <chapter-title>Computing with infinite networks</chapter-title>. In <source>Advances in Neural Information Processing Systems</source> <fpage>295</fpage>–<lpage>301</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_056">
<label>[56]</label><mixed-citation publication-type="journal"><string-name><surname>Wójtowicz</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Wójtowicz</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Piekarczyk</surname>, <given-names>J.</given-names></string-name> <etal>et al.</etal> (<year>2016</year>). <article-title>Application of remote sensing methods in agriculture</article-title>. <source>Communications in Biometry and Crop Science</source> <volume>11</volume>(<issue>1</issue>) <fpage>31</fpage>–<lpage>50</lpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_057">
<label>[57]</label><mixed-citation publication-type="journal"><string-name><surname>Yari</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Madramootoo</surname>, <given-names>C. A.</given-names></string-name>, <string-name><surname>Woods</surname>, <given-names>S. A.</given-names></string-name>, <string-name><surname>Adamchuk</surname>, <given-names>V. I.</given-names></string-name> and <string-name><surname>Huang</surname>, <given-names>H. -H.</given-names></string-name> (<year>2017</year>). <article-title>Assessment of field spatial and temporal variabilities to delineate site-specific management zones for variable-rate irrigation</article-title>. <source>Journal of Irrigation and Drainage Engineering</source> <volume>143</volume>(<issue>9</issue>) <fpage>04017037</fpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_058">
<label>[58]</label><mixed-citation publication-type="other"><string-name><surname>Zammit-Mangion</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Kaminski</surname>, <given-names>M. D.</given-names></string-name>, <string-name><surname>Tran</surname>, <given-names>B. -H.</given-names></string-name>, <string-name><surname>Filippone</surname>, <given-names>M.</given-names></string-name> and <string-name><surname>Cressie</surname>, <given-names>N.</given-names></string-name> (<year>2023</year>). Spatial Bayesian neural networks. <italic>arXiv preprint arXiv:</italic><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2311.09491"><italic>2311.09491</italic></ext-link>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.spasta.2024.100825" xlink:type="simple">https://doi.org/10.1016/j.spasta.2024.100825</ext-link>. <ext-link ext-link-type="uri" xlink:href="https://mathscinet.ams.org/mathscinet-getitem?mr=4731204">MR4731204</ext-link></mixed-citation>
</ref>
<ref id="j_nejsds71_ref_059">
<label>[59]</label><mixed-citation publication-type="other"><string-name><surname>Zhan</surname>, <given-names>W.</given-names></string-name> and <string-name><surname>Datta</surname>, <given-names>A.</given-names></string-name> (<year>2023</year>). Neural networks for geospatial data. <italic>arXiv preprint arXiv:</italic><ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2304.09157"><italic>2304.09157</italic></ext-link>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_060">
<label>[60]</label><mixed-citation publication-type="journal"><string-name><surname>Zhang</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Guan</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Peng</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Jiang</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Zhou</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Yang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Pan</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Franz</surname>, <given-names>T. E.</given-names></string-name>, <string-name><surname>Heeren</surname>, <given-names>D. M.</given-names></string-name>, <string-name><surname>Rudnick</surname>, <given-names>D. R.</given-names></string-name> <etal>et al.</etal> (<year>2021</year>). <article-title>Challenges and opportunities in precision irrigation decision-support systems for center pivots</article-title>. <source>Environmental Research Letters</source> <volume>16</volume>(<issue>5</issue>) <fpage>053003</fpage>.</mixed-citation>
</ref>
<ref id="j_nejsds71_ref_061">
<label>[61]</label><mixed-citation publication-type="journal"><string-name><surname>Zhao</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Yang</surname>, <given-names>R.</given-names></string-name> and <string-name><surname>Li</surname>, <given-names>Y.</given-names></string-name> (<year>2017</year>). <article-title>Crop yield and water productivity responses in management zones for variable-rate irrigation based on available soil water holding capacity</article-title>. <source>Transactions of the ASABE</source> <volume>60</volume>(<issue>5</issue>) <fpage>1659</fpage>–<lpage>1667</lpage>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>
