The way the text explained it, we can actually calculate SSA, SSW, and SST separately; however, as you have explained, SSA + SSW = SST. The text showed a couple of examples where it calculated SST and SSW first and then subtracted SST - SSW, which equals SSA. If we look at the calculations, assuming we are doing them by hand, it is actually much easier to calculate SSA directly, since its formula is fairly simple. That leaves only SSW to calculate. In reality, we do not need SST at all to find the F value; all we need are SSA and SSW. We use SSA and SSW to get the MSA and MSW values, and then divide MSA by MSW to get the F statistic.
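The shortcut described above can be sketched in plain Python. The three groups of values here are made-up illustration data, not from the text; the point is that F comes from SSA and SSW alone, with SST computed only as an optional check:

```python
# Hand calculation of a one-way ANOVA F statistic using only SSA and SSW.
# The group values below are invented for illustration.
groups = [
    [4.0, 5.0, 6.0],   # group 1
    [6.0, 7.0, 8.0],   # group 2
    [8.0, 9.0, 10.0],  # group 3
]

n_total = sum(len(g) for g in groups)
grand_mean = sum(x for g in groups for x in g) / n_total

# SSA: sum of squares among groups (a.k.a. SSB) -- weighted squared
# deviations of each group mean from the grand mean.
ssa = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

# SSW: sum of squares within groups -- squared deviations of each
# observation from its own group mean.
ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

# Mean squares and the F statistic; SST is never needed for F.
k = len(groups)                  # number of groups
msa = ssa / (k - 1)              # among-groups mean square
msw = ssw / (n_total - k)        # within-groups mean square
f_stat = msa / msw

# Optional check that the identity SSA + SSW = SST holds.
sst = sum((x - grand_mean) ** 2 for g in groups for x in g)
assert abs((ssa + ssw) - sst) < 1e-9
```

With these sample numbers the group means are 5, 7, and 9 against a grand mean of 7, giving SSA = 24, SSW = 6, MSA = 12, MSW = 1, and F = 12, while SST = 30 confirms the identity.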
The objective of ANOVA is to identify differences between groups/segments based on the variable under study. The terms mentioned (SSA, SSW, SST) represent the different types of variation that exist among and within groups. In an ANOVA output, you would generally get SSA, SSW, and SST. They are named differently in different textbooks.
SSA / SSR / SSB - all of these refer to the same quantity, depending on which book you consult. They stand for Sum of Squares Among the Groups (SSA), Regression Sum of Squares (SSR), and Sum of Squares Between Groups (SSB). This is the variation that is present between different groups for the phenomenon that is under ...
The following posting discusses the difference between SSW and SSA.