"The dissonance theory suggests that we are rationalizing creatures rather than rational ones." Eysenck, Mindwatching.
It is sometimes supposed that the practical problem of "weakness of will" presents a larger theoretical problem: if someone didn't do something they claimed to think was right, doesn't that suggest that they didn't really think it was right? Or if someone says they want to give up smoking, but then persists in the habit, doesn't that show that they don't really want to give up? More generally, if someone decides to do something and has a suitable opportunity, but does not do so, doesn't this imply that they didn't really decide to do it? My answer is: yes and no.
I think there are a number of potential mistakes occurring here. The first is to assume that we either want something or we do not - e.g. that we cannot both want something to occur and want that same thing not to occur. I see no reason to accept this assumption. There are quite obvious cases of indirect conflict, where we want two incompatible things. Back to smoking - someone wants both the feeling they get from it and the health that smoking removes. Therefore, they both want to smoke and want not to smoke. (It may also be true that they want not to want to smoke, and perhaps even that they want not to want not to smoke.)
I think we can see moral weakness in much the same way; the only difference is that moral thinking is assumed to be a higher function in some way, i.e. that it involves the faculty of reason, and that it is therefore above such conflicts. While it is true that moral thinking involves reason, this does not imply that there is no conflict - for how strongly do we want to behave rationally? I do not necessarily mean that only desires can act as motivation (although that account is serviceable, provided it allows for desires for morality and rationality), and I certainly do not wish to suggest that the only reason we have for acting morally is our desire to do so [see elsewhere for why], so let me put this a different way.
I cannot now remember getting dressed this morning. Whether, when I put my trousers on, I put my left or my right leg in first, I have no idea. Let me suggest why I don't know: I didn't think about it. I wasn't paying attention. Similarly, when I was out walking shortly afterwards, I was thinking about what I'd do when I arrived at my destination; I was not thinking "Now I shall move my left foot forward, now my right...". There seems to be a huge range of things we can do without thinking - sometimes whilst chatting I am not consciously aware of my response until after I've said it.
I would say that, in these sorts of situations, I am acting without reasoning. I do not mean unreasonably or irrationally - if someone suggested I had no reason to get dressed this morning, I would likely protest in strong terms. Had I not dressed, I would've got cold and would've caused a scene when I went for a walk, and had I not put on my boots I might've cut my feet. Certainly there are good reasons why I should've got dressed this morning, but I can honestly say I did not think of them at the time. So, to a certain extent, my "reasons" for getting dressed this morning are nothing more than post-event rationalizations.
These observations point to something quite striking: we can act without reasoning. I do not mean that there is no thinking process going on when we do so, that the brain is inactive, only that the "higher" part of it, the (self-?) conscious, rational part (which performs the abstract reasoning?), the "voice" part when we are thinking in a language, is not engaged in the task. The act is being done by some other part of the brain. [1] So what determined my behaviour? Habit, I suppose. Perhaps there is some "put on trousers" pattern of activity which, once activated in my brain, does not need conscious [2] intervention to complete the task. [3] I shall not go into this any further; I believe I have shown all I need. The conclusion is that something other than my conscious self can control my actions - so the second mistake which might be made is to think that anything we want (in the sense of providing motivation or affecting behaviour) we must consciously want, i.e. that each part of the brain is aware of (and agrees with) the other parts.
So even a very quick study of behaviour and our internal thoughts suggests very strongly that the brain exists as a number of more-or-less independently functional parts. [4] Once we have accepted this point, the (theoretical, but not practical) "problem" of akrasia is solved: it is no wonder that we do not always do what we consciously believe to be for the best, for it is not only the conscious part of our mind which controls our behaviour. How far does this go? I suggest very far: much of the time, the more rational parts of our brain are little more than spectators of our behaviour... and we should not assume that our rational mind could override the irrational parts even if it tried [5]. In fact, the reverse is probably more common. When reasons are considered and evaluated, the resulting (rational) decision may, if disliked, be overridden by an irrational part of the brain. The resulting conflict ("cognitive dissonance") has been shown often to result in a change in the perceived reasonability of the action. [6] Thus a utilitarian - particularly one of a rule, indirect, or other type lacking rigour in the application of the felicific calculus - may quite consistently (and unwittingly) falsify the results of the calculus.
To make progress against the practical problem of akrasia, I suggest we take particular care to develop generally optimal habits and "patterns" of thought or behaviour, and perhaps we should sometimes practise breaking trivial habits simply in an attempt to "strengthen" our conscious will. [7] The generally optimal habits will, I suggest, include the very careful, deliberate, and rigorous application of the felicific calculus.
[1] I am reminded of Parfit's thoughts on personal identity - see particularly Reasons and Persons, section 78 (on psychological connectedness and continuity).
[2] For the remainder of the paper I use "conscious" to mean such that the "self" or "voice" referred to above is aware of (or controlling) it. It would be possible for there to be more than one such self or voice in a brain and for them to be unaware of each other (cf. the split-brain experiments).
[3] See chapters 12 and 17 ("The Divided Brain" and "Doing two things at once") of Mindwatching by Hans and Michael Eysenck.
[4] "The different functional modules of the brain, however tightly integrated, do not simply interpenetrate." HI Section 2.1.
[5] See "The Science Of Happiness" by Stephen Braun, pages 71 and 72 on the "mismatch in the wiring between our emotions and our linear, rational, consciousness-generating pre-frontal cortex [which] means that the limbic system can physically outgun the prefrontal cortex with electrical signals".
[6] Mindwatching, chapter 19 ("Resolving Inner Conflict").
[7] Bentham says a man has "firmness of mind" if, "[when he has decided] not to disclose a fact, although he should be put to the rack, he perseveres in such resolution after the rack is brought into his presence, and even applied to him."