Answer
$SSE=0.75$
The better fit is case (a), where the SSE was 0.5.
Work Step by Step
Residuals and Sum-of-Squares Error (SSE)
If we model a collection of data points $(x_{1}, y_{1}), \ldots, (x_{n}, y_{n})$ with a linear equation $\hat{y}=mx+b$,
then the residuals are the $n$ quantities (Observed Value $-$ Predicted Value):
$(y_{1}-\hat{y}_{1}), (y_{2}-\hat{y}_{2}), \ldots, (y_{n}-\hat{y}_{n})$ .
The sum-of-squares error (SSE) is the sum of the squares of the residuals:
SSE $=(y_{1}-\hat{y}_{1})^{2}+(y_{2}-\hat{y}_{2})^{2}+\cdots+(y_{n}-\hat{y}_{n})^{2}.$
The model with smaller SSE gives the better fit.
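As an illustrative aside (not part of the textbook's solution), the SSE computation can be sketched in a few lines of Python; the helper name `sse` and its arguments are our own:

```python
# Minimal sketch, assuming the data are given as (x, y) pairs and the
# model is the line y_hat = m*x + b. The helper name `sse` is hypothetical.
def sse(points, m, b):
    """Sum of squared residuals (observed minus predicted) for y_hat = m*x + b."""
    return sum((y - (m * x + b)) ** 2 for x, y in points)
```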
----
(b)
Build a table, column by column:
\begin{array}{|c|c|c|c|c|}
\hline x & y & \hat{y}=2x-1.5 & (y-\hat{y}) & (y-\hat{y})^2 \\
\hline 1 & 1 & 0.5 & 0.5 & 0.25 \\
2 & 2 & 2.5 & -0.5 & 0.25 \\
3 & 4 & 4.5 & -0.5 & 0.25 \\
\hline & & & {\bf SSE}= & {\bf 0.75} \\
\hline
\end{array}
So SSE $=0.25+0.25+0.25=0.75$.
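As a quick sanity check (again, not part of the printed solution), the same arithmetic in Python, using the data points and the model $\hat{y}=2x-1.5$ from the table:

```python
# Check of the table above for part (b): data (1, 1), (2, 2), (3, 4),
# model y_hat = 2x - 1.5; this should reproduce SSE = 0.75.
points = [(1, 1), (2, 2), (3, 4)]
sse_b = sum((y - (2 * x - 1.5)) ** 2 for x, y in points)
print(sse_b)  # 0.75
```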
The better fit is case (a), where the SSE was 0.5.