Bsaber AI

Origin

So I know the community has made a lot of custom maps, but I couldn't find anything by NF, and the only map available was extremely hard. I figured more people would play if they could just put in any song and have a map generated for it. A lot of songs are also missing an Easy difficulty, and some are Expert-only (and, in my opinion, not really Expert at all). I believe VR has real potential as an escape into another reality, but that only works if there is a large library of maps that appeals to more categories of players.

Overview

The objective is to have a model that takes an audio file as input and outputs a full map.
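
For context, a finished map is essentially a list of timed note events plus song metadata. Here is a minimal sketch of one note in the kind of output the model would need to produce, assuming the common v2 beatmap format (the values are illustrative):

# One note event in a v2-style difficulty file (illustrative values).
example_note = {
    "_time": 12.5,       # position in beats from the start of the song
    "_lineIndex": 2,     # column, 0-3
    "_lineLayer": 1,     # row, 0-2
    "_type": 1,          # 0 = left saber, 1 = right saber, 3 = bomb
    "_cutDirection": 0,  # 0-8, direction the block must be cut
}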

Step 1 - Downloading Data

We first need to grab all of the map metadata from the BeatSaver API.

import json
import urllib.parse

import requests

# Start just before this timestamp and page backwards through the catalog.
before = "2024-01-28T00:00:01+00:00"
headers = {
    'Accept': 'application/json',
    'Accept-Charset': 'utf-8'
}

continuing = True
while continuing:
    url = ("https://api.beatsaver.com/maps/latest?automapper=false&before="
           + urllib.parse.quote(before) + "&pageSize=100")
    # headers must be passed as a keyword argument; the second positional
    # argument of requests.get() is params, not headers.
    response = requests.get(url, headers=headers)
    maps = response.json()['docs']

    for curMap in maps:
        map_id = curMap['id']
        metadata = curMap['metadata']    # song info: bpm, duration, etc.
        stats = curMap['stats']          # community votes
        uploadTime = curMap['uploaded']  # used as the cursor for the next page
        versions = curMap['versions']    # per-version data we need to keep

        # Keep maps with some community approval and a reasonable length.
        if stats['upvotes'] > 10 and metadata['duration'] > 80:
            print(map_id, stats['upvotes'], stats['downvotes'], metadata['duration'])

            # Extract the relevant fields.
            map_data = {
                "id": map_id,
                "metadata": metadata,
                "stats": stats,
                "uploaded": uploadTime,
                "versions": versions
            }

            # Write the extracted data to a JSON file named after the map id.
            output_file_name = map_id + ".json"
            with open(output_file_name, 'w') as json_file:
                json.dump(map_data, json_file, indent=4)

            print("Data written to", output_file_name)

    # A short page means we've reached the end of the catalog.
    continuing = len(maps) > 2
    if maps:
        before = uploadTime  # oldest map on this page becomes the next cursor

print("Finished on", before)

Once this finished, it had downloaded around 50k JSON files. I then had to narrow that down and download only the maps worth keeping; the filter parameters I chose left roughly 10k maps for my dataset.

import json
import os

import requests

def download_zip(url, map_id):
    # Save the map archive under ./downloads/<id>.zip.
    os.makedirs("./downloads", exist_ok=True)
    output = "./downloads/" + str(map_id) + ".zip"
    r = requests.get(url)
    with open(output, 'wb') as f:
        f.write(r.content)

def compare_ignore_case(str1, str2):
    return str1.casefold() == str2.casefold()

def check_and_download(filename):
    with open(filename) as file:
        data = json.load(file)

    upvotes = data['stats']['upvotes']
    downvotes = data['stats']['downvotes']
    duration = data['metadata']['duration']
    total = upvotes + downvotes
    per = (upvotes / total) * 100 if total else 0  # approval percentage
    # A whole-number BPM makes aligning beats to time easier later.
    evenbpm = int(data['metadata']['bpm']) == data['metadata']['bpm']

    # Which of the five standard difficulties does this map provide?
    names = ['easy', 'normal', 'hard', 'expert', 'expertplus']
    difficulties = [False, False, False, False, False]
    for diff in data['versions'][0]['diffs']:
        for i, name in enumerate(names):
            if compare_ignore_case(diff['difficulty'], name):
                difficulties[i] = True
    # [False, False, False, False, True] means the map is Expert+ only.
    onlyExpertPlus = difficulties == [False, False, False, False, True]

    # Keep well-liked, reasonably short maps that aren't Expert+ only.
    if upvotes > 100 and per > 80 and evenbpm and not onlyExpertPlus and duration < 250:
        download_zip(data['versions'][0]['downloadURL'], data['id'])
        return True
    return False

path = "."
con = 0
for l in os.listdir(path):
    if not l.endswith('.json'):
        continue  # skip anything that isn't one of the metadata files
    try:
        if check_and_download(l):
            con += 1
    except Exception as e:
        print("Ran into an issue on file " + l + ": " + str(e))
print('Downloaded ' + str(con) + ' songs')
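
The filter script leaves a folder of zip archives, while the cleaning script in the next step walks extracted folders, so an extraction pass is implied in between. A minimal sketch of that step (my assumption; the ./downloads and ./extracted paths are placeholders):

import os
import zipfile

# Extract every downloaded map archive into its own folder.
src = "./downloads"
dst = "./extracted"
os.makedirs(dst, exist_ok=True)
for name in os.listdir(src):
    if name.endswith(".zip"):
        try:
            with zipfile.ZipFile(os.path.join(src, name)) as z:
                z.extractall(os.path.join(dst, name[:-4]))
        except zipfile.BadZipFile:
            print("Skipping corrupt archive:", name)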

Step 2 - Cleaning Data

I then created a short script to remove files that weren't going to be useful.

import os

# Function to delete image and video files
def delete_media_files(folder):
    for root, dirs, files in os.walk(folder):
        for file in files:
            file_path = os.path.join(root, file)
            # Check if the file is an image or video file
            if file.lower().endswith(('.png', '.jpg', '.jpeg', '.gif', '.bmp',
                                       '.tiff', '.svg', '.webp', '.ico',
                                       '.mp4', '.avi', '.mkv', '.mov', '.wmv', '.flv')):
                try:
                    os.remove(file_path)
                    print(f"Deleted: {file_path}")
                except Exception as e:
                    print(f"Error deleting {file_path}: {e}")

# Specify the directory to start traversing from
folder_to_search = input("Enter the directory path to start traversing from: ")

# Check if the specified directory exists
if os.path.exists(folder_to_search):
    # Confirm with the user before proceeding
    confirmation = input(f"Are you sure you want to delete all image and video files in {folder_to_search}? (yes/no): ")
    if confirmation.lower() == "yes":
        delete_media_files(folder_to_search)
        print("Operation completed.")
    else:
        print("Operation aborted by user.")
else:
    print("Directory not found.")

Notable Links

Created by nick sypteras in October 2017, an OSU map generator is close to Beat Saber in a way: it generates clicks in a 2D field based on music, so it could really help along the process.

Another OSU mapper on GitHub, written by kotritrona using TensorFlow and deep learning, hasn't been updated in 2 years, so I assume version 7.0 is the final product. Another is Dance Dance Convolution, the project built on the actual Dance Dance Revolution game. The idea has been proven to work, and it rests on the same principle; the only issue is deciding within a 9-position grid instead of 4, which expands the number of possible combinations and patterns exponentially.

There is also a useful wiki webpage for verifying this that is highly liked in the community, plus two music analysis resources: a beat tracking website and a 2016 writing on music analysis.