Computer evaluation number VS. lines VS. "best"

Both of these discrepancies are "connected" for the same reason: engine depth, that is, how many ply (or moves) deep the computer is calculating. The evaluations don't line up because they were produced at different depths, and the engine keeps revising its line evaluations as it calculates further. If you let the analysis run longer, the numbers will converge as both evaluations reach the same conclusion.
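If you want to see this for yourself, here's a minimal sketch using the python-chess package, assuming you have a local Stockfish binary (the path below is a placeholder for your own install). It re-scores the same position at increasing depths so you can watch the number settle:

```python
# Minimal sketch: re-score one position at increasing depths and watch
# the evaluation settle. Assumes python-chess is installed and that
# /usr/local/bin/stockfish is a placeholder path to your own Stockfish.
import chess
import chess.engine

board = chess.Board()
board.push_san("e4")   # ply 1
board.push_san("c5")   # ply 2 -- one full move played

engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")
for depth in (8, 12, 18, 24):
    info = engine.analyse(board, chess.engine.Limit(depth=depth))
    # info["score"] is relative to the side to move; .white() fixes the POV
    print(f"depth {depth:2d}: {info['score'].white()}")
engine.quit()
```

The shallow-depth numbers will typically jump around from one depth to the next, which is exactly why two evaluations taken at different depths disagree.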


Btw, a "move" in chess is when both sides have moved, and a single "ply" is just one side moving. A sample chess game might begin with:
1. e4 c5
This is move one: 1. e4 is the first ply and 1...c5 is the second, so two ply equal one move. Computers (generally) calculate enormous numbers of these because they can't "think" in patterns the way human players can.
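If it helps, here's a toy snippet showing the two-ply-per-move arithmetic (the ply_to_move helper is made up purely for illustration):

```python
# Toy helper (hypothetical, for illustration): convert a ply count
# into the corresponding move number and side to move.
def ply_to_move(ply: int) -> str:
    move_number = (ply + 1) // 2          # two ply per full move
    side = "White" if ply % 2 == 1 else "Black"
    return f"move {move_number}, {side}"

print(ply_to_move(1))   # move 1, White  (1. e4)
print(ply_to_move(2))   # move 1, Black  (1...c5)
print(ply_to_move(18))  # move 9, Black: an 18-ply search looks 9 full moves ahead
```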

That all makes sense. In the second image, is the depth used to calculate "best" deeper than the depth used to calculate the computer evaluation number next to it? Or is it calculated some other way, from a database maybe? These numbers don't seem to fluctuate the way the numbers next to the lines do.
Put another way, it looks like I could trust one of three sources:
1) the line numbers from the first image: [1.15] c4 <--- (shallow depth, 18 ply)
2) the evaluation number from the second image: [1.10] c4 <--- (unknown depth)
3) the "best" from the second image: [0.92] Nf3 <--- (unknown depth)
Is "best" really best?

I don't worry about the "good" / "excellent" / "best" distinction. A good move is a good move, regardless.
Even moves marked "inaccuracy" are sometimes completely playable. (This is probably due to the engine's default analysis depth, which is relatively low.)
The main point, when analyzing, is to understand what you're doing in the position, what you should be doing, and why you should be doing it.
I only pay attention when the engine spots a blunder I made or points out a tactic I missed. The rest of the time I disregard it and focus on logical plans instead.
Your mileage may vary, though.
I understand Stockfish isn't foolproof, given the limited processing capacity available on a site this large, etc. What I could use help understanding is the mechanics behind the "best" label in the chess.com report and the computer evaluation numbers listed in chess.com analysis.
There are a couple of discrepancies. One I kinda get and the other I totally don't get.
1) The computer evaluation number on the analysis lines before the move doesn't match the one on the lines after. I'm guessing this is an artifact of shallow depth?
2) The "best" move is listed with a weaker computer evaluation number. Which one do I trust? Is the "best" move evaluated by a different engine?