Gathering information on an online target can be a time-consuming activity, especially if you only need specific information about a target with many subdomains. We can use a web crawler designed for OSINT called Photon to do the heavy lifting, crawling URLs on our behalf to retrieve information valuable to a hacker.
The goal is to learn as much as possible about the target without tipping them off that they're being looked at. This rules out some of the more obvious methods of scanning and enumeration, which means reconnaissance requires some creativity when hunting for clues.
Knowing What to Look For
The Photon OSINT scanner fills this niche by providing a flexible, easy-to-use command-line interface. Rather than just looking for vulnerabilities, Photon quickly surveys what's out there and presents it to the hacker in a way that's easy to understand.
One of Photon's most useful functions is the ability to recognize and extract certain types of data automatically, such as the scripts on a page, email addresses, and secrets like passwords or API keys that may have been exposed by mistake.
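Under the hood, this kind of extraction is essentially pattern matching over each page's text. Here's a minimal Python sketch of the idea; the sample page text and patterns below are illustrative and not Photon's actual code:

```python
import re

# Hypothetical page text standing in for a crawled response body.
page = """
Contact us at press@example.com or support@example.com.
config = {"aws_key": "AKIAIOSFODNN7EXAMPLE"}
"""

# Simple email pattern; real crawlers use more permissive variants.
emails = re.findall(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", page)

# AWS access key IDs follow a well-known "AKIA" + 16 character format.
keys = re.findall(r"AKIA[0-9A-Z]{16}", page)

print(sorted(set(emails)))
print(keys)
```

Photon runs checks like these against every page it crawls, which is how interesting strings end up in its output files without any extra work from you.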
Aside from looking at current web pages, Photon also lets you look into the past. You can use archived versions of a web page documented on the Wayback Machine as "seeds" for your search, scraping all the recorded URLs of the site as a source for additional crawling. While using Photon effectively takes some patience and an understanding of the many available filters, it doesn't take much to start pulling clues about your target.
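To give a sense of how Wayback seeding works: the Internet Archive exposes a CDX API that lists archived URLs for a domain, and those URLs can feed a crawler as seeds. A minimal sketch of building such a query follows; this shows the general technique, and Photon's internals may differ:

```python
from urllib.parse import urlencode

def wayback_seed_query(domain):
    """Build an Internet Archive CDX API query that lists archived
    URLs under a domain, suitable for use as crawler seeds."""
    params = {
        "url": domain + "/*",  # everything under the domain
        "output": "json",
        "fl": "original",      # only return the original URL field
        "collapse": "urlkey",  # de-duplicate equivalent URLs
    }
    return "http://web.archive.org/cdx/search/cdx?" + urlencode(params)

print(wayback_seed_query("example.com"))
```

Fetching that URL returns a JSON list of every archived URL the Wayback Machine knows about for the domain, including pages that no longer exist on the live site.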
What You Need
Photon is a popular tool because it is cross-platform, which means it will work on any system with Python installed. I find that it crashes under Python 2, so I recommend running it with the python3 command in front of it, despite what the GitHub instructions say.
To check if your system has Python installed, you can open a terminal window and type python3 . If you do not have it installed, you can install it with apt install python3 . If your output looks like the below, you're ready to go.
Python 3.6.8 (default, Jan 3 2019, 03:42:36)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
Type exit() to exit the Python shell, and we'll start installing what we need to run Photon.
To get started with Photon, make sure you have Python 3 installed. Once you do, you can download Photon and navigate into its directory with the following commands. Don't skip the cd Photon line.

git clone https://github.com/s0md3v/Photon.git
cd Photon

Photon also needs a few Python libraries. From inside the Photon directory, you can install them with pip3 install -r requirements.txt .
Now we can run python3 photon.py -h to see the list of options we can use to scan.
python3 photon.py -h
Photon v1.2.1

usage: photon.py [-h] [-u ROOT] [-c COOK] [-r REGEX] [-e EXPORT] [-o OUTPUT]
                 [-l LEVEL] [-t THREADS] [-d DELAY] [-v]
                 [-s SEEDS [SEEDS ...]] [--stdout STD]
                 [--user-agent USER_AGENT] [--exclude EXCLUDE]
                 [--timeout TIMEOUT] [--clone] [--headers] [--dns] [--ninja]
                 [--keys] [--update] [--only-urls] [--wayback]

optional arguments:
  -h, --help            show this help message and exit
  -u ROOT, --url ROOT   root url
  -c COOK, --cookie COOK
                        cookie
  -r REGEX, --regex REGEX
                        regex pattern
  -e EXPORT, --export EXPORT
                        export format
  -o OUTPUT, --output OUTPUT
                        output directory
  -l LEVEL, --level LEVEL
                        levels to crawl
  -t THREADS, --threads THREADS
                        number of threads
  -d DELAY, --delay DELAY
                        delay between requests
  -v, --verbose         verbose output
  -s SEEDS [SEEDS ...], --seeds SEEDS [SEEDS ...]
                        additional seed URLs
  --stdout STD          send variables to stdout
  --user-agent USER_AGENT
                        custom user agent(s)
  --exclude EXCLUDE     exclude URLs matching this regex
  --timeout TIMEOUT     http request timeout
  --clone               clone the website locally
  --headers             add headers
  --dns                 enumerate subdomains & DNS data
  --ninja               ninja mode
  --keys                find secret keys
  --update              update photon
  --only-urls           only extract URLs
  --wayback             fetch URLs from archive.org as seeds
To run the most basic scan, the formula is python3 photon.py -u target.com .
Step 3: Map DNS Information

One of the most useful and interesting features of Photon is the ability to generate a visual DNS map of everything linked to the domain. This gives you great insight into the kind of software running on the computers behind the target domain.
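To get a feel for what happens behind a DNS map, the core operation is just resolving candidate hostnames and recording which ones exist. Here's a toy Python sketch; the subdomain wordlist is made up, and real tools use large curated lists plus proper DNS record lookups:

```python
import socket

def resolve(hostname):
    """Resolve a hostname to an IPv4 address, or None if it doesn't exist."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# Hypothetical wordlist; real enumeration uses thousands of candidates.
for sub in ("www", "mail", "dev"):
    host = f"{sub}.example.com"
    ip = resolve(host)
    print(host, "->", ip if ip else "no record")
```

Each hostname that resolves becomes a node in the map, and the IP addresses, server banners, and DNS records hanging off it fill in the rest of the picture.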
To do this, we run a scan with the --dns flag. To create a map of priceline.com, run the command python3 photon.py -u priceline.com --dns in a terminal window.
python3 photon.py -u https://www.priceline.com/ --dns
The resulting subdomain map is huge! It is much too big to fit here, so we'll look at a few segments. We can see servers and IP addresses associated with the Priceline service. Here is a zoomed-out view:
Furthermore, we can see third-party integrations and other infrastructure linked to Priceline's services. This also gives us information about the mail servers they use and possibly poorly secured third-party services we could take advantage of to gain access. Again, this is a zoomed-out view:
Let's zoom in and look at the MX records, responsible for the email service. It is clear that Priceline uses Google and VeriSign services.
Zooming in further, we can start to see Larn, BIG-IP, and Nginx servers detected. Connected to a DigitalOcean account, we see an Ubuntu server running a specific version of OpenSSH. Hopefully it is not a vulnerable one.
Looking more closely at Priceline's core services, we see Microsoft, Apache, and BIG-IP systems. In some cases, we can see the specific versions of the services these IP addresses host.
All of this is a gold mine for a hacker looking for the most vulnerable system connected to the target.
Step 4: Scrape Secret Keys & Intel
Next, let's try to capture some email addresses and keys from a website. We'll use PBS.org as the example.
To run the search, we'll add some other flags to increase the depth and speed of the search. In a terminal window, we can run python3 photon.py -u pbs.org --keys -t 10 -l 3 to indicate that we want to go three levels deep into URLs, and that we want to open ten threads to do the crawling. The results come back in a file called "intel," the beginning of which looks like this:
python3 photon.py -u https://www.pbs.org/ --keys -t 10 -l 3

firstname.lastname@example.org
email@example.com
firstname.lastname@example.org
email@example.com
firstname.lastname@example.org
email@example.com
firstname.lastname@example.org
nstock_sales @ email@example.com
firstname.lastname@example.org
email@example.com
firstname.lastname@example.org

We've captured some email addresses! We cast a fairly wide net for this search, so there may be many unrelated emails in our list. This is because we scraped three levels of URLs deep and probably picked up some unrelated sites.
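The -l and -t flags correspond to a depth-limited, multi-threaded crawl. Here's a toy sketch of that idea using an in-memory link graph instead of live HTTP requests; all of the names and pages below are illustrative, not Photon's code:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy link graph standing in for pages and the URLs found on them.
LINKS = {
    "/": ["/about", "/news"],
    "/about": ["/contact"],
    "/news": ["/news/1"],
    "/contact": [],
    "/news/1": [],
}

def fetch_links(url):
    """Stand-in for an HTTP fetch that returns the links on a page."""
    return LINKS.get(url, [])

def crawl(seed, levels=3, threads=10):
    seen, frontier = {seed}, [seed]
    for _ in range(levels):
        # Fetch every page in the current level concurrently.
        with ThreadPoolExecutor(max_workers=threads) as pool:
            results = pool.map(fetch_links, frontier)
        frontier = [u for links in results for u in links if u not in seen]
        seen.update(frontier)
    return seen

print(sorted(crawl("/", levels=3, threads=10)))
```

Each extra level multiplies how many pages get visited, which is exactly why a deep crawl pulls in so many loosely related URLs, and why more threads make the wait bearable.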
While we didn't find any keys in this scan, the flag we set causes Photon to look for strings that may be API keys or other secrets that were accidentally published on the target website.
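Besides matching known key formats, one common way scanners flag candidate secrets is by measuring a string's entropy: long, random-looking tokens score high, while ordinary words score low. Here's a quick sketch of that heuristic; this is a general technique rather than necessarily how Photon does it, and the sample token is made up:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character; random tokens score high."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_secret(token, min_len=20, threshold=3.5):
    """Flag long, high-entropy strings as candidate keys."""
    return len(token) >= min_len and shannon_entropy(token) >= threshold

# A common word vs. a made-up, key-shaped token.
for token in ("navigation", "ghp_x7Kd92LqPz4vWn8tRb3yJm5c"):
    print(token, looks_like_secret(token))
```

The thresholds here are guesses; tuning them is the usual trade-off between missing real keys and drowning in false positives.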
Step 5: Route Requests Through a Third Party Using Ninja Mode
Let's say we're working from a sensitive IP address, like a police station, government office, or even just your home, and you don't want the target to know you're visiting their website. You can put distance between yourself and the target by using the --ninja flag, which sends your requests to a third-party website that makes the requests for you and forwards the response.
The result is slower but eliminates the risk of the target identifying the IP address of the organization you are working from. Because you have less control over these requests, keep in mind that they can take much longer to complete.
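Conceptually, ninja mode just wraps your target URL inside a request to someone else's fetcher, so the third party's IP shows up in the target's logs instead of yours. A tiny sketch of that idea follows; relay.example is a hypothetical endpoint, not a real service Photon uses:

```python
from urllib.parse import quote

# Hypothetical relay endpoint; Photon's ninja mode works in a
# similar spirit, using public "fetch this URL for me" services.
RELAY = "https://relay.example/fetch?url="

def ninja_url(target):
    """Wrap a target URL so a third party makes the request for us."""
    return RELAY + quote(target, safe="")

print(ninja_url("https://www.pbs.org/"))
```

The target URL must be percent-encoded so it survives being embedded as a query parameter, which is what quote() handles here.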
To run a lighter version of the previous scan in ninja mode, we can run the command python3 photon.py -u pbs.com --keys -t 10 -l 1 --ninja in a terminal window.
python3 photon.py -u https://www.pbs.com/ --keys -t 10 -l 1 --ninja
When it comes to crawling hundreds of URLs for information, it's very rare that you'd want to do it yourself. Photon makes it easy to crawl large numbers of subdomains or multiple targets, so you can scale up your research during reconnaissance. With intelligent options built in to parse and search for data types like email addresses and secret API keys, Photon can catch the small mistakes that expose a target's most valuable information.
I hope you enjoyed this guide to using the Photon OSINT scanner to crawl websites for OSINT data! If you have any questions about this tutorial on web scraping, or if you have a comment, leave it in the comments below, and feel free to reach me on Twitter @KodyKinzie.