ElenaZhivun (talk | contribs) |
Mike, Zak, and I have discussed what might be wrong with the LabVIEW code. We have identified the following issues:
# <s>In the demodulation circuit diagram: two 16-bit integers are multiplied and produce a 32-bit integer, which is then scaled back to 16 bit. The original v16 program scales it by a factor of 2^-14, while it really should be 2^-16, to pick out the high word of the product.</s> It should be 2^-15, and the extra factor of 2 seems to come from accounting for the demodulation with a sine, because the average of sin^2(x) over a period is 1/2.
# In the data sampling: the FPGA always acquires data at 500 ksps. The data is demodulated and filtered at 250 ksps, and both of these frequencies are hardwired. The sampling rate set by the host VI only determines the rate at which data is sent to the computer. This rate is lower than the acquisition rate, and the VI seems to discard the extra samples rather than average them.
# The ADC, with 16-bit resolution and a ±10 V input range, is only sufficient to resolve ~20 fT per bit at 15 μV/fT gain (one LSB is 20 V / 2^16 ≈ 305 μV, and 305 μV ÷ 15 μV/fT ≈ 20 fT).
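As a sanity check of item 1, here is a minimal numerical sketch in Python (not the actual LabVIEW FPGA code) of the fixed-point scaling: multiplying two full-scale 16-bit (Q15) sines, averaging the 32-bit products, then applying 2^-15 plus the extra factor of 2 from the mean of sin^2 recovers the original amplitude. The sample count N is arbitrary.

```python
import math

# Demodulate a full-scale 16-bit sine with an in-phase 16-bit reference.
# Each product of two Q15 values fits in 32 bits; the average of sin^2 is
# 1/2, so recovering the amplitude needs the 2^-15 scaling (high word of
# a Q15 x Q15 product) times an extra factor of 2.
N = 256                  # samples per period (arbitrary for this sketch)
amp = 2**15 - 1          # full-scale 16-bit amplitude
acc = 0
for n in range(N):
    s = int(amp * math.sin(2 * math.pi * n / N))   # signal sample
    r = int(amp * math.sin(2 * math.pi * n / N))   # in-phase reference
    acc += s * r         # 32-bit product accumulated
mean = acc // N          # average of the product over one period
# Scale by 2^-15 (pick the high word) and by 2 (mean of sin^2 is 1/2):
recovered = (mean >> 15) * 2
print(recovered)         # close to the original amplitude (32767)
```

With the original 2^-14 scaling the result would come out a factor of 2 too large, consistent with the struck-out observation above.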
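On item 2, the cost of discarding rather than averaging can be illustrated numerically: for white noise, averaging the N intermediate samples reduces the RMS by roughly sqrt(N), which keep-every-Nth decimation forgoes. A rough simulation (Gaussian noise, arbitrary N and sample counts, not the actual VI behavior):

```python
import random

# Compare two ways of decimating a noisy stream by a factor N:
# keeping every Nth sample (what the VI appears to do) vs averaging
# each block of N samples. Averaging shrinks white-noise RMS ~sqrt(N).
random.seed(0)
N = 250              # decimation factor (e.g. 250 ksps -> 1 ksps)
M = 2000             # number of decimated output points
noise = [random.gauss(0.0, 1.0) for _ in range(N * M)]

picked = noise[::N]                                      # discard-based
averaged = [sum(noise[i*N:(i+1)*N]) / N for i in range(M)]  # block average

def rms(x):
    return (sum(v * v for v in x) / len(x)) ** 0.5

print(rms(picked), rms(averaged))  # averaged is ~sqrt(250) ~ 16x smaller
```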
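A quick back-of-the-envelope check of item 3, using the values from the note (Python just for the arithmetic):

```python
# ADC least-significant-bit size in volts and in field units.
v_range = 20.0                 # +/-10 V input range -> 20 V span
bits = 16                      # ADC resolution
gain = 15e-6                   # 15 uV/fT (gain quoted in the note)
lsb_volts = v_range / 2**bits  # ~305 uV per code
lsb_ft = lsb_volts / gain      # field change per code
print(f"{lsb_volts*1e6:.0f} uV/LSB, {lsb_ft:.1f} fT/LSB")
# -> 305 uV/LSB, 20.3 fT/LSB
```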