This is how Leonard Digges described a telescope in 1571:
By concave and convex mirrors of circular [spherical] and parabolic forms, or by paires of them placed at due angles, and using the aid of transparent glasses which may break, or unite, the images produced by the reflection of the mirrors, there may be represented a whole region; also any part of it may be augmented so that a small object may be discerned as plainly as if it were close to the observer, though it may be as far distant as the eye can descrie. (source)
While it’s not clearly known who first invented the telescope – or if such an event even happened – historians widely credit Hans Lippershey with having installed two specially crafted lenses in a tube in 1608 “for seeing things far away as if they were nearby” (source). We might describe a telescope the same way today. The difference is that this definition captures far less of how a modern telescope works than it did of one built even a hundred years ago. For example, consider this description of how the CHIME (Canadian Hydrogen Intensity Mapping Experiment) radio telescope works:
To search for FRBs, CHIME will continuously scan 1024 separate points or “beams” on the sky 24/7. Each beam is sampled at 16,000 different frequencies and at a rate of 1000 times per second, corresponding to 130 billion “bits” of data per second to be sifted through in real time. The data are packaged in the X-engine and shipped via a high-speed network to the FRB backend search engine, which is housed in its own 40-foot shipping container under the CHIME telescope. The FRB search backend will consist of 128 compute nodes with over 2500 CPU cores and 32,000 GB of RAM. Each compute node will search eight individual beams for FRBs. Candidate FRBs are then passed to a second stage of processing which combines information from all 1024 beams to determine the location, distance and characteristics of the burst. Once an FRB event has been detected, an automatic alert will be sent, within seconds of the arrival of the burst, to the CHIME team and to the wider astrophysical community allowing for rapid follow up of the burst. (source)
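The quoted data rate is easy to sanity-check. If each sample is one byte – an assumption on my part, since the quote doesn’t state a sample size – the numbers line up with the “130 billion bits per second” figure:

```python
# Back-of-the-envelope check of CHIME's quoted FRB data rate.
# The 8-bit-per-sample figure is an assumption, not stated in the quote.
beams = 1024            # sky "beams" scanned simultaneously
frequencies = 16_000    # frequency channels sampled per beam
samples_per_sec = 1000  # time samples per second
bits_per_sample = 8     # assumed sample size

bits_per_sec = beams * frequencies * samples_per_sec * bits_per_sample
print(f"{bits_per_sec / 1e9:.0f} billion bits per second")
# → 131 billion bits per second, close to the quoted 130 billion
```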
I suppose this is the kind of advancement you’d expect in 400 years. And yes, I’m aware that I’ve compared an optical telescope to a radio telescope, but my point still stands. You’d see similar leaps between optical telescopes from 400 years ago and optical telescopes as they are today. I only picked the example of CHIME because I just found out about it.
Now, while the difference in sophistication is awesome, the detector component of CHIME itself looks like this:
The telescope has no moving parts. It will passively scan patches of the sky, record the data and send it for processing. The recording itself rests on a branch of physics that didn’t exist until the early 20th century: quantum mechanics. And because we had quantum mechanics, we knew what kind of instrument to build to intercept whatever information about the universe we needed. So the data-gathering part itself is not what we’re in awe of. We might have been able to put something resembling the CHIME detector together 50 years ago, had anyone wanted to.
What I think we’re really in awe of is how much data CHIME has been built to gather in unit time and how that data will be processed. In other words, what really makes this leap of four centuries evident is the computing power we have developed. This also means that, going forward, improving on CHIME will mean improving the detector hardware a little and improving the processing software a lot. (According to the telescope’s website, the computers connected to CHIME will be able to process data with an input rate of 13 TB/s. That’s already massive.)
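To get a feel for what that computing power looks like per machine, you can divide the figures quoted earlier across the FRB backend’s 128 nodes (this is just arithmetic on the numbers from CHIME’s own description, nothing more):

```python
# Rough per-node load in the FRB search backend,
# computed from the figures quoted in CHIME's description.
total_bits_per_sec = 130e9  # quoted total rate for the FRB search
total_beams = 1024
nodes = 128
total_ram_gb = 32_000

print(f"{total_beams // nodes} beams per node")                       # → 8 beams per node
print(f"{total_bits_per_sec / nodes / 8 / 1e6:.0f} MB/s per node")    # → 127 MB/s per node
print(f"{total_ram_gb / nodes:.0f} GB of RAM per node")               # → 250 GB of RAM per node
```

The eight-beams-per-node figure matches the quote directly; the rest shows that, spread across the cluster, the firehose becomes a stream each node can realistically search in real time.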