When writing to program memory from code, be aware that writes happen in blocks: you cannot write a single word/instruction; instead you write what is called a row (or block).
I have not tested this in practice, but the datasheet gives us the row size for the PIC16F18325 when writing through ICSP - presumably it's the same for programmatic writes as well:
"When write and erase operations are done on a row basis, the row size (number of 14-bit words) for erase operation is 32 and the row size (number of 14-bit latches) for the write operation is 32"
As each word is 14 bits wide, you have to write two bytes to fill it (the two most significant bits are ignored). Thus, rows are 32 words but 64 bytes long when dealt with from code.
PS: Before doing a write, you have to erase the memory you want to write to. Erasing flips all bits to 1, whereas writing can only flip bits to 0. If a bit is not 1 when writing starts, there is no way to make it 1 by writing either.
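The erase-before-write semantics can be modelled in a few lines of host-side C. This is a sketch of the bit-level behaviour only - the real PIC16 self-write goes through the NVM control registers and an unlock sequence described in the datasheet:

```c
#include <assert.h>
#include <stdint.h>

#define ROW_WORDS 32          /* row size for the PIC16F18325, per the datasheet */
#define WORD_MASK 0x3FFF      /* program words are 14 bits wide */

/* Host-side model of one flash row: erase sets all bits, writes AND bits in. */
static uint16_t flash_row[ROW_WORDS];

static void erase_row(void) {
    for (int i = 0; i < ROW_WORDS; i++)
        flash_row[i] = WORD_MASK;          /* erasing flips every bit to 1 */
}

static void write_word(int i, uint16_t value) {
    flash_row[i] &= (value & WORD_MASK);   /* writing can only flip bits to 0 */
}
```

Writing 0x1234 to a freshly erased word stores it exactly; writing 0x0F0F over it without erasing first leaves only the bits that were 1 in both values.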
Friday, August 18, 2017
Program memory size on MCUs
Funny thing: In most MCU datasheets, the program memory size is listed in KB. The PIC16F18325 for example is said to have 14KB of program memory.
Don't let this fool you! It does not mean that you can store 14KB of data/constants.
The problem is that each word (instruction) on the PIC16 is 14 bits long. Storing an 8-bit constant in program memory (flash/ROM) still takes up 14 bits of space.
So how big is the memory really? That's easy: 14KB = 14 * 1024 * 8 = 114688 bits, divided by 14 bits per word = 8192 words. Thus, the "real" memory size is 8K words - you can store 8192 instructions or constants.
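The arithmetic is easy to double-check:

```c
#include <assert.h>

/* "14KB" of flash counted in 14-bit words, as in the post:
   kb kilobytes -> bits -> words of bits_per_word bits each. */
int flash_words(int kb, int bits_per_word) {
    return kb * 1024 * 8 / bits_per_word;
}
```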
Friday, August 4, 2017
DCO: Culprit found but not fixed
I did extensive testing of various op amps yesterday, and as suspected, the amplitude error at low frequencies is caused by the op amp. I also tried recalibrating the power supply but that had no effect whatsoever.
The results from various op amps varied wildly, and the results from non-inverting and inverting buffer configurations also varied a lot.
My best hit was with a UA741: in inverting mode the amplitude was perfect across the entire frequency range! Unfortunately, it was just a lucky strike; I retried several other UA741s and the result varied from around 7V to around 13V amplitude (when the real amplitude should be 10V).
The conclusion, then, is that behaviour at voltages very close to zero varies a lot, even between specimens of the same op amp family.
I am not sure how these effects behave over time and temperature; this must be tested.
If the effects are mostly production variations, it would be possible to calibrate each DCO.
This can be done in code even though the DAC lookup tables must reside in program memory due to RAM limitations (1kB of memory is required for the DAC keystep and rise-per-substep tables, and the MCU has only 1kB of RAM in total). The PIC16F allows programmatic/runtime writes to flash program memory, and though the number of erase/write cycles is limited (rated at 10 000 or more), even a write on every system startup would probably be possible.
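One way to keep the tables out of RAM: on XC8 for mid-range PICs, const-qualified arrays are normally placed in program memory. A sketch with placeholder contents (the table names follow the post; they are not real calibration data):

```c
#include <assert.h>
#include <stdint.h>

/* const-qualified objects go to flash, not RAM, on XC8 for PIC16,
   so these 2 x 512 bytes cost program memory only. Contents are
   placeholders, not real calibration data. */
const uint16_t dac_keystep[256]      = { 0 };
const uint16_t rise_per_substep[256] = { 0 };
```

Together the two tables account for exactly the 1kB mentioned above.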
Hopefully though, calibration will seldom be necessary. As a nice side effect, calibration will correct variances in both charging caps and resistors as well as opamps.
Calibration
Here is how I imagine it to be done:
A reference voltage is applied to one of the MCU pins. This is used as the reference voltage of one of the internal comparators.
First, apply the lowest frequency. By raising and lowering the DAC output voltage until the comparator changes polarity, we figure out
- if the voltage is too high or too low
- if the amplitude error is within acceptable limits
A slightly too low amplitude is acceptable, but a too high amplitude is not, as it would trigger the comparator during normal operation if we choose to use the comparator as a reset-on-frequency-change trigger. The amplitude must therefore always be adjusted to sit slightly below the comparator trigger point to prevent false triggers.
We also have to consider measuring the highest frequency amplitude error to see if errors are always either high or low. For now, I'll assume that they are always one or the other.
After finding the initial error, one loops through the remaining 255 samples, starting from the bottom.
For every frequency, the DAC voltage is raised or lowered (based on what we found for the lowest frequency) until the comparator changes polarity - this gives us the "correct" value for that step. To save time, we may stop checking once we reach a frequency where the error is small enough (and lower than the comparator voltage).
We should now have a correct lookup table for key samples. All that is needed now is to calculate the interpolation lookup table, and write everything to non-volatile flash memory.
To minimise the number of recalibrations, we could let the DCO check the key samples on startup. If they are still within limits, no recalibration is necessary.
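The per-step search above can be sketched in C. Everything hardware-related here is hypothetical and simulated - `comparator_above_ref()` stands in for reading the real comparator, and its internal "plant" is a toy model - but the adjust-until-polarity-change-then-back-off logic is the one described:

```c
#include <assert.h>
#include <stdint.h>

#define KEY_STEPS 256

static uint16_t dac_code[KEY_STEPS];   /* DAC value per key step */
static const int reference = 1000;     /* comparator reference, arbitrary units */

/* Simulated comparator: the "amplitude" grows with the DAC code, plus a
   small per-step offset error standing in for analog component variation. */
static int comparator_above_ref(int step) {
    int amplitude = dac_code[step] + (step % 7);
    return amplitude > reference;
}

/* For each key step, raise the DAC code until the comparator changes
   polarity, then back off one step so the amplitude sits just below the
   reference (slightly too low is acceptable, too high is not). */
static void calibrate(void) {
    for (int step = 0; step < KEY_STEPS; step++) {
        while (!comparator_above_ref(step))
            dac_code[step]++;          /* raise until the comparator trips */
        dac_code[step]--;              /* back off below the trigger point */
    }
}
```

After calibration, every step should sit exactly one DAC code below the trigger point.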
A different approach to finding the current amplitude would be letting the DCO control a PWM-based DAC to generate the reference voltage. It could then change the reference voltage instead of DAC output value to find the amplitude. Not sure that it's a better approach though.
PS: If the signal amplitude is 10V, using a resistor divider with R1 = 100k and R2 = 68k will get the amplitude down to 4.048V. This lets us use the internal 4.096V voltage reference with the comparator.
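The divider arithmetic checks out: Vout = Vin * R2 / (R1 + R2) = 10V * 68k / 168k, which is about 4.048V. In C:

```c
#include <assert.h>
#include <math.h>

/* Output of an unloaded resistor divider: Vout = Vin * R2 / (R1 + R2). */
double divider_out(double vin, double r1, double r2) {
    return vin * r2 / (r1 + r2);
}
```

Note that the comparator input will load the divider slightly; with 100k/68k this should be negligible but is worth keeping in mind.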
Wednesday, August 2, 2017
DCO: Switching to JFET to try to improve low frequency amplitude
To try to fix the issue I'm having where the saw wave amplitude is too low at low frequencies, I decided to redesign the core using a JFET in place of the BJT that resets the integrator.
I found no p-channel JFET in my parts box, but I had plenty of the J112 n-channel JFETs, so to make things easier I decided to go with the Yusynth design.
The Yusynth design, however, has a 0-5V saw wave, whereas mine is 0 to -10V. To make sure things would still work, I changed the core ever so slightly to get a 0-10V output:
instead of tapping the charge voltage directly from the DAC through a non-inverting op amp buffer, I switched to a unity-gain inverting amplifier. This sinks current instead of sourcing it, reversing the charging direction. To make this work, one also has to replace the 2N3906 PNP transistor with a 2N3904 NPN. (One should also invert the polarity of the timer output, but as both a positive-going and a negative-going spike are generated, just slightly offset in time, this was not required for testing.)
But testing this, I got a big surprise - the low frequency amplitude was no longer too low, it was too high! Previously I had to increase the DAC value from 60 to 80-something; now I had to reduce it from 60 to 48 (steps times 5V/65536).
This made me less certain that switching to a JFET would improve anything, but I still decided to try it.
I added an LM311 comparator and set its negative input to 0.118V using a 120k and a 4k7 resistor. This ensures that when the positive input is just slightly higher than 0V, the output spikes up to 15V, and when the input is 0V the output is -15V - similar to the Yusynth circuit, where the comparator outputs a negative voltage to turn off the JFET. The circuit worked instantly (!), but as suspected, nothing changed.
So now I guess I've ruled out the reset transistor as the cause of the offset. Also, the fact that the amplitude error changes when I swap the charging polarity makes me believe that the cap is not at fault either (though I will still check this).
That leaves either the DAC (which is unlikely, for the same reason as the cap) or the op amp buffer.
It could also be that a small difference in the power lines (measured to +15.01 and -15.00 volts) could cause this, I don't know. I will try recalibrating and also try different opamps to see if that changes anything.
For reference: here is the original breadboarded circuit with the 0 to -10V output. I have since added the missing 2R2 resistor; however, that changed nothing. The DAC is connected where the 20k pot is in this drawing.
Tuesday, August 1, 2017
Yusynth VCO core discharging
I may have written about this before, but here goes:
The Yusynth saw core ramps down from 5V to 0V, then resets to 5V. The top of the charging cap is always at 5V, while the bottom drops towards zero as current is sunk through the expo converter (U4).
The LM311 comparator has its positive leg grounded (when sync is not used). The LM311 is an open collector type comparator. For these, the rule, as written here, is that:
"Current WILL flow through the open collector when the voltage at the MINUS input is higher than the voltage at the PLUS input.
Current WILL NOT flow through the open collector when the voltage at the MINUS input is lower than the voltage at the PLUS input."
When current does NOT flow, the output is pulled towards +15V via R20. When current flows, the output is pulled towards -15V, which is connected to pin 1 of U5 (pin 1 is called ground, but it can be connected to a lower voltage if necessary).
During capacitor charging, the voltage at the minus input is positive and thus higher than the positive input. In this case, current flows and the output is negative.
Once the negative input reaches 0V, for an instant, the input is lower than the voltage at the positive input, and current stops flowing. The output is then pulled towards +15V.
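The open-collector rule quoted above, combined with the pull-up to +15V and pin 1 tied to -15V, can be captured in a tiny host-side model (a sketch of the logic, not a real device model):

```c
#include <assert.h>

/* LM311 open-collector output with a pull-up to +15V and the "ground"
   pin (pin 1) tied to -15V, as in the Yusynth core: current flows
   (output pulled to the pin-1 rail) when V- > V+, otherwise the
   pull-up resistor wins. */
double lm311_out(double v_plus, double v_minus) {
    return (v_minus > v_plus) ? -15.0 : 15.0;
}
```

With V+ grounded, a positive saw voltage on V- holds the output at -15V (JFET off); the instant V- drops to or below 0V, the output snaps to +15V and resets the cap.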
This would mean that as long as the output is negative, the JFET transistor is switched off, and once the output is positive, the JFET conducts, resetting the cap.
This is in accordance with what Wikipedia says about an n-channel JFET: "To switch off an n-channel device requires a negative gate-source voltage".
The source and drain of the transistor will always be between 0V and 5V, thus a -15V gate voltage assures that it is turned off. Similarly, source and drain will never be above 5V, so a +15V gate will always turn it on.
Since the source/drain may however reach 0V, the emitter of the comparator's output transistor (pin 1) cannot be connected to ground - this would leave the comparator's low output, and thus the JFET gate, at about 0.6V, which is not enough to keep the JFET pinched off.
DCO: comparator to reset period
I've been thinking about ways to enable frequency changes in the middle of a period without having to restart the period, which resets the phase.
My thought so far has been to use a comparator with a reference voltage set slightly higher than the maximum amplitude of the wave, and to reset the period once the comparator triggers. That way we can reset the timer and set a new charge voltage at any time, and the wave will just slightly overshoot the desired amplitude. I would think that this could be a good solution, but it has some issues.
1) The amplitude may be temperature sensitive - if the capacitor charge time varies with temperature or if the charging current changes due to temperature effects on the resistor.
2) Setting such a reset point requires a trimpot and a way to check that the point is not set too low, in which case it would interfere with the normal operation of the DCO.
As for 1), that is just something that has to be tested. But in case 2), it would be possible to let the microcontroller figure out the cutoff point by itself. If the MCU controls the reference voltage, it can loop through all frequencies (or at least a subset) and find the maximum amplitude during normal operation, then set the reference voltage to slightly higher than this. It would also be possible to rerun this operation later if temperature rises. The cutoff point may be found either using an analog pin, or it can be done using the comparator and gradually lowering the reference voltage until the comparator triggers.
The MCU has a built-in comparator. It also has a built-in DAC that can generate a reference voltage, but its resolution is only 5 bits. We need something better than that, but we do not want to add another SPI-controlled DAC or similar.
A possible solution: use the built-in PWM generator in conjunction with a lowpass filter to generate a DC voltage. This post, this article and this article have some filter suggestions.
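For the PWM-plus-lowpass idea, the main design number is the filter's corner frequency, which must sit well below the PWM frequency to keep ripple down while still allowing the reference to settle. A sketch with illustrative component values (not from the post):

```c
#include <assert.h>
#include <math.h>

/* Corner frequency of a first-order RC lowpass: fc = 1 / (2*pi*R*C).
   Using a local constant for pi to avoid relying on M_PI being defined. */
double rc_corner_hz(double r_ohms, double c_farads) {
    const double pi = 3.14159265358979323846;
    return 1.0 / (2.0 * pi * r_ohms * c_farads);
}
```

For example, 10k and 1uF give a corner around 16Hz - far below a PWM carrier of tens of kHz, so the PWM ripple is heavily attenuated, at the cost of a slow-settling reference.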