Since the ground-breaking 1965 paper by Juris Hartmanis and Richard E. Stearns[1] and the 1979 book by Michael Garey and David S. Johnson on NP-completeness,[2] the term "computational complexity" (of algorithms) has come to refer mainly to asymptotic computational complexity.
Further, unless specified otherwise, the term "computational complexity" usually refers to an upper bound on the asymptotic computational complexity of an algorithm or a problem, which is usually written in big O notation, e.g., O(n²).
Other types of (asymptotic) computational complexity estimates are lower bounds ("big Omega" notation; e.g., Ω(n)) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using "big Theta" notation; e.g., Θ(n log n)).
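These bounds can be stated formally. The following standard definitions (not taken from the source, but conventional in the literature) make precise what the O, Ω, and Θ notations assert about a running-time function f relative to a reference function g:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \le c\, g(n)
\qquad \text{(asymptotic upper bound)}

f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \ge c\, g(n)
\qquad \text{(asymptotic lower bound)}

f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
\qquad \text{(asymptotically tight bound)}
```

Thus Θ(n log n) asserts both that the cost grows no faster than a constant multiple of n log n and that it grows at least that fast, for all sufficiently large n.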
A further tacit assumption is that worst-case analysis of computational complexity is meant unless stated otherwise.
An alternative approach is probabilistic analysis of algorithms.
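The distinction between the two approaches can be illustrated with a minimal sketch (an example chosen here, not drawn from the source): linear search over n items needs n comparisons in the worst case, but a probabilistic analysis, assuming the target is uniformly distributed, yields about (n + 1)/2 comparisons on average.

```python
import random

def linear_search_cost(items, target):
    """Return the number of comparisons linear search makes to find target."""
    comparisons = 0
    for x in items:
        comparisons += 1
        if x == target:
            break
    return comparisons

n = 1000
items = list(range(n))

# Worst-case analysis: the target is the last element, so cost is n, i.e. O(n).
worst = linear_search_cost(items, n - 1)

# Probabilistic analysis: target drawn uniformly at random; the expected
# cost is (n + 1) / 2, estimated here empirically over many trials.
random.seed(0)
trials = 10_000
avg = sum(linear_search_cost(items, random.randrange(n)) for _ in range(trials)) / trials

print(worst, avg)
```

The empirical average comes out close to (n + 1)/2 ≈ 500.5, half the worst-case cost, which is the kind of gap that motivates probabilistic analysis in the first place.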