"Intelligence officials investigating how Edward J. Snowden gained access to a huge trove of the country’s most highly classified documents say they have determined that he used inexpensive and widely available software to “scrape” the National Security Agency’s networks, and kept at it even after he was briefly challenged by agency officials.
Using “web crawler” software designed to search, index and back up a website, Mr. Snowden “scraped data out of our systems” while he went about his day job, according to a senior intelligence official. “We do not believe this was an individual sitting at a machine and downloading this much material in sequence,” the official said. The process, he added, was “quite automated...”"
źródło: "The Washington Post"
...drawing on the reference in this topic to a post by Mr. Henry Farrell published on "The Monkey Cage" blog...
"The New York Times article on how former NSA contractor Edward Snowden got data from the National Security Agency has gotten a lot of ridicule from tech people, who find its breathless references to exotic software such as “wget” (a commonly used Unix tool) to be hilarious. However, there is some quite interesting information in the piece. If the article is correct, Snowden used a Web crawler to trawl for links to relevant documents, targeting the NSA’s shared “wikis,” easily modifiable Web sites that use the same rough architecture as Wikipedia. So what are Wikipedia equivalents doing hidden in the internal systems of the NSA?..."
źródło: "The Monkey Cage"
...I will add only that I do so in order to be able, in place of a comment of my own, to justifiably quote the words of H.L. Mencken that serve as the motto of this site...
...in no way with the cheap intention of claiming any special qualities for myself by invoking "The Washington Post", but only to continue the thread...
source: "Blogger T.L." - label "U.S. National Security Agency"
...in the hope of finally, once and for all, putting a full stop to this topic while I am still alive...
...or so I believe.