Ragno 1.6
Ragno is a Passive URL Crawler | Written in Python3 | Fetches URLs from the Wayback Machine, AlienVault's Open Threat Exchange & Common Crawl
Disclaimer
:computer: This project was created for legitimate purposes and personal use only.
THIS SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. YOU MAY USE THIS SOFTWARE AT YOUR OWN RISK. USE IS THE SOLE RESPONSIBILITY OF THE END USER. THE DEVELOPERS ASSUME NO LIABILITY AND ARE NOT RESPONSIBLE FOR ANY MISUSE OR DAMAGE CAUSED BY THIS PROGRAM.
Features
Works on Windows/Linux/macOS
Passive Crawler (does not interact with the target directly)
Crawls URLs from 3 sources (see the sketch after this list):
Wayback Machine
Common Crawl
AlienVault's OTX (Open Threat Exchange)
DeepCrawl Feature (if enabled, Ragno tries to fetch URLs from all 74+ Common Crawl APIs)
MultiThreading (only used when the DeepCrawl feature is enabled)
Subdomain results can be included or excluded via a command-line argument (-s)
Saves results to a TXT file
Quiet Mode
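To make the passive approach concrete, here is a minimal Python sketch of how these three sources can be queried over their public HTTP APIs. This is illustrative, not Ragno's actual implementation; the target domain, the Common Crawl index name, and the OTX page limit are placeholder assumptions.

```python
# Minimal sketch of passive URL collection (illustrative, not Ragno's code).
# Assumptions: DOMAIN, the Common Crawl index name, and the OTX page limit
# below are placeholders chosen for the example.
import json

import requests

DOMAIN = "example.com"  # hypothetical target


def wayback_urls(domain):
    # Wayback Machine CDX API: one archived URL per line of text output.
    resp = requests.get(
        "https://web.archive.org/cdx/search/cdx",
        params={
            "url": f"*.{domain}/*",
            "output": "text",
            "fl": "original",
            "collapse": "urlkey",
        },
        timeout=30,
    )
    return resp.text.splitlines()


def commoncrawl_urls(domain, index="CC-MAIN-2023-50"):
    # A single Common Crawl index; a "DeepCrawl" would loop over every index
    # listed at https://index.commoncrawl.org/collinfo.json, typically with a
    # thread pool (concurrent.futures.ThreadPoolExecutor) to speed things up.
    resp = requests.get(
        f"https://index.commoncrawl.org/{index}-index",
        params={"url": f"*.{domain}/*", "output": "json"},
        timeout=30,
    )
    return [json.loads(line)["url"] for line in resp.text.splitlines() if line]


def otx_urls(domain):
    # AlienVault OTX: paginated url_list endpoint (first page only here).
    resp = requests.get(
        f"https://otx.alienvault.com/api/v1/indicators/domain/{domain}/url_list",
        params={"limit": 100, "page": 1},
        timeout=30,
    )
    return [item["url"] for item in resp.json().get("url_list", [])]


if __name__ == "__main__":
    urls = set(wayback_urls(DOMAIN))
    urls |= set(commoncrawl_urls(DOMAIN))
    urls |= set(otx_urls(DOMAIN))
    print("\n".join(sorted(urls)))
```

Because every request goes to an archive or threat-intelligence service rather than the target itself, the target never sees any traffic from the scan.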
How To Use in Linux
# Installing using pip
$ pip3 install Ragno
# Checking Help Menu
$ ragno --help
# Run Normal (Fast) Crawl
$ ragno -d target.com
# Run Normal (Fast) Crawl + Saving Result
$ ragno -d target.com -o result.txt
# Run Normal (Fast) Crawl + Saving Result + Quiet Mode (Without Showing URLs on screen)
$ ragno -d target.com -o result.txt -q
# Run Deep Crawl + Saving Result + Quiet Mode (Without Showing URLs on screen)
$ ragno -d target.com -o result.txt -q --deepcrawl
How To Use in Windows
# Install dependencies: download and install the latest Python 3.x
# from the official site (https://www.python.org/downloads/)
# Installing ragno using pip
$ pip install Ragno
# Checking Help Menu
$ ragno --help
# Run Normal (Fast) Crawl
$ ragno -d target.com
# Run Normal (Fast) Crawl + Saving Result
$ ragno -d target.com -o result.txt
# Run Normal (Fast) Crawl + Saving Result + Quiet Mode (Without Showing URLs on screen)
$ ragno -d target.com -o result.txt -q
# Run Deep Crawl + Saving Result + Quiet Mode (Without Showing URLs on screen)
$ ragno -d target.com -o result.txt -q --deepcrawl
Available Arguments

Optional Arguments

| Short Hand | Full Hand | Description |
| --- | --- | --- |
| -h | --help | Show this help message and exit |
| -o OUTPUT | --output OUTPUT | Save results in a TXT file |
| -s | --subs | Include results from subdomains |
| -q | --quiet | Run the scan without printing URLs on screen |
|  | --deepcrawl | Use all available Common Crawl APIs for crawling URLs (takes time) |
| -t THREAD | --thread THREAD | Number of threads to use. Default: 50 (use when --deepcrawl is enabled) |

Required Arguments

| Short Hand | Full Hand | Description |
| --- | --- | --- |
| -d DOMAIN | --domain DOMAIN | Target domain name, e.g. google.com |
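For reference, the argument set above maps onto a standard argparse definition. The following is a hedged reconstruction of that CLI surface, not Ragno's actual source code:

```python
# Illustrative argparse layout matching the table above; a reconstruction
# for reference, not Ragno's actual source.
import argparse

parser = argparse.ArgumentParser(description="Ragno - Passive URL Crawler")
parser.add_argument("-d", "--domain", required=True,
                    help="Target domain name, e.g. google.com")
parser.add_argument("-o", "--output", metavar="OUTPUT",
                    help="Save results in a TXT file")
parser.add_argument("-s", "--subs", action="store_true",
                    help="Include results from subdomains")
parser.add_argument("-q", "--quiet", action="store_true",
                    help="Run the scan without printing URLs on screen")
parser.add_argument("--deepcrawl", action="store_true",
                    help="Use all available Common Crawl APIs (takes time)")
parser.add_argument("-t", "--thread", type=int, default=50, metavar="THREAD",
                    help="Number of threads to use (with --deepcrawl)")
args = parser.parse_args()
print(args)
```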
Use Cases
After finding URLs, you can filter them based on your attack surface and mass-hunt particular vulnerabilities such as XSS, LFI, open redirect, SSRF, etc.
Example 1: One Liner for Hunting Open Redirect
Install qsreplace:
sudo wget https://github.com/tomnomnom/qsreplace/releases/download/v0.0.3/qsreplace-linux-amd64-0.0.3.tgz && sudo tar xvzf qsreplace-linux-amd64-0.0.3.tgz && sudo rm qsreplace-linux-amd64-0.0.3.tgz && sudo mv qsreplace /usr/bin/ && sudo chmod +x /usr/bin/qsreplace
Run One Liner
ragno -d testphp.vulnweb.com -q -o ragno_urls.txt && cat ragno_urls.txt | grep -a -i "=http" | qsreplace "http://evil.com" | while read target_url; do curl -s -L -I "$target_url" | grep -i "evil.com" && echo "[+] [Vulnerable] $target_url"; done
You can use the gf tool by tomnomnom to filter URLs with juicy parameters, and then test them further.
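If you prefer to stay in Python, a small filter along these lines can approximate that kind of pattern matching; the input file name and the parameter list are illustrative assumptions, so adjust both to your target:

```python
# Hypothetical gf-style filter: keep URLs whose query strings carry
# parameter names often tied to open redirects or SSRF. The file name and
# parameter list are illustrative placeholders, not a fixed convention.
import re
import sys

PATTERNS = re.compile(r"[?&](url|redirect|next|dest|return|continue)=", re.I)

with open("ragno_urls.txt") as fh:  # the file written via the -o flag above
    for line in fh:
        if PATTERNS.search(line):
            sys.stdout.write(line)
```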
Contribute
All contributors are welcome; this repo needs contributors to keep improving the tool and make it the best it can be.