Software: 2016 code overview + looking forward to 2017
Overview
- Misc. 2016 code
- 2016 auto
- 2016 vision
- Vision in 2017
Miscellaneous - 2016 code
- We program in command-based C++
- We used a navX-MXP for heading (see the sketch below)
- 8 motors, 16 DIO ports
- https://github.com/4917EDSS/2016Repo
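For reference, a minimal sketch of reading the navX-MXP heading from a command-based C++ subsystem. The class and method names here are illustrative, not taken from the 2016Repo; it assumes the KauaiLabs AHRS library.

#include <WPILib.h>
#include <AHRS.h>  // KauaiLabs navX library

// Illustrative subsystem wrapping the navX-MXP so commands can read heading.
class DriveSub : public Subsystem {
public:
  DriveSub() : Subsystem("DriveSub"), ahrs(SPI::Port::kMXP) {}
  double GetHeading() { return ahrs.GetAngle(); }  // continuous angle, degrees
  void ZeroHeading() { ahrs.ZeroYaw(); }
  void InitDefaultCommand() override {}  // no default command in this sketch
private:
  AHRS ahrs;  // navX-MXP plugged into the roboRIO MXP port
};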
Auto
- A LOT of options in auto
- Could cross the low bar, rough terrain, rock wall, ramparts, or moat in auto
- Could shoot from position 1, position 2 (left or center shot), position 3 (center or right shot), position 4, or position 5
- 5 crossings × 7 shot choices = 35 valid options
- Separated into 2 parts (crossing, then shooting)
Auto
- Code for crossing each defense
- Each defense crossing ends with the back bumper aligned with the defense
- From there, your shooting code can depend only on your position (not on which defense was crossed)
Auto - code autoDefenceOptions = new SendableChooser(); autoDefenceOptions->AddDefault("Do Nothing", new AutoDefaultGrp()); autoDefenceOptions->AddObject("Low Bar Defence", new AutoLowBarGrp()); autoDefenceOptions->AddObject("Ramparts Defence", new AutoRampartsGrp()); autoDefenceOptions->AddObject("Moat Defence", new AutoMoatGrp()); autoDefenceOptions->AddObject("Rock Wall Defence", new AutoRockWallGrp()); autoDefenceOptions->AddObject("Rough Terrain Defence", new AutoRoughTerrainGrp()); autoLocationOptions = new SendableChooser(); autoLocationOptions->AddDefault("Do Nothing", new AutoDefaultGrp()); autoLocationOptions->AddObject("Position 1 (Low Bar)", new AutoPosition1ShootGrp()); autoLocationOptions->AddObject("Position 2", new AutoPosition2ShootGrp()); autoLocationOptions->AddObject("Position 2 SHOOT LEFT", new AutoPosition2ShootLeftGrp()); autoLocationOptions->AddObject("Position 3", new AutoPosition3ShootGrp()); autoLocationOptions->AddObject("Position 3 SHOOT RIGHT", new AutoPosition3ShootRightGrp()); autoLocationOptions->AddObject("Position 4", new AutoPosition4ShootGrp()); autoLocationOptions->AddObject("Position 5", new AutoPosition5ShootGrp());
Vision - what we used it for
Based on the position we saw the goal at, we could (a sketch follows below):
- Adjust our turret left or right based on how far left/right we saw the goal
- Adjust our hood up or down to change release angle based on how high/low we saw the goal
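A hedged sketch of the left/right adjustment, not the team's actual control code; the gain and names are hypothetical:

// Illustrative proportional correction: nudge the turret toward the goal.
// goalCenterX comes from vision; imageWidth is the camera frame width.
double TurretCorrection(double goalCenterX, double imageWidth) {
  const double kP = 0.005;                           // hypothetical gain
  double pixelError = goalCenterX - imageWidth / 2;  // how far off-center
  return kP * pixelError;                            // motor output (clamp elsewhere)
}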
Vision - what we wish we knew
First time doing vision; basic tips:
- Need ultra-low exposure in addition to LEDs around the camera (do this first, it simplifies a LOT)
- Create a portable vision replica to tune with at competitions - Windsor
- Best to have a second form of feedback on what vision controls (for us: turret encoders)
- Camera needs to be on a mount that is impossible to warp (including whatever the mount itself mounts to)
- Mount the camera at the robot's center if at all possible
- Reduce the number of things you control with vision (up/down was unnecessary)
Vision - camera selection
- Started with the standard Microsoft LifeCam HD-3000
  - Issues with autofocus - the fix required surgery
  - Issues with keeping exposure low
- Ended up going with an Axis M1011 camera
  - Fixed focus
  - Exposure settings worked
  - Harder to mount
  - Requires network config (much more painful than USB in FRC)
Vision - processing
- Decided to use a new program designed for FRC called GRIP
- Super simple way of creating a vision pipeline
Vision - pipeline
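This slide presumably showed the GRIP pipeline graph itself; as a rough stand-in, a typical retroreflective-tape pipeline boils down to a few OpenCV calls. A hedged C++ sketch, with all threshold and filter values hypothetical:

#include <opencv2/opencv.hpp>
#include <vector>

// Hedged sketch of the kind of pipeline GRIP builds: HSV-threshold the
// LED-lit tape, find contours, keep the plausible ones.
std::vector<cv::Rect> FindTargets(const cv::Mat& frame) {
  cv::Mat hsv, mask;
  cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
  cv::inRange(hsv, cv::Scalar(50, 100, 100),    // hypothetical bounds for a
              cv::Scalar(90, 255, 255), mask);  // green ring light, low exposure
  std::vector<std::vector<cv::Point>> contours;
  cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
  std::vector<cv::Rect> targets;
  for (const auto& c : contours) {
    cv::Rect box = cv::boundingRect(c);
    if (box.area() > 100)  // hypothetical minimum-size filter
      targets.push_back(box);
  }
  return targets;  // GRIP publishes x/y/width/height like these to NetworkTables
}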
Vision - on roboRIO
- First, we installed Java on the roboRIO
- GRIP then had a “deploy” option
  - Takes the current pipeline and turns it into a Java process
  - Automatically starts the Java process on boot of the roboRIO
- As seen in the previous pipeline, GRIP pushes data to NetworkTables
- NetworkTables can be accessed within robot code:

std::shared_ptr<NetworkTable> gripTable = NetworkTable::GetTable("GRIP/myContoursReport");
std::vector<double> WidthArray = gripTable->GetNumberArray("width", llvm::ArrayRef<double>());
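Once those arrays are in robot code, here is a hedged sketch of turning them into a single target. The “centerX” key is part of GRIP's standard contours report; the selection logic and the 320-pixel image width are illustrative:

#include <networktables/NetworkTable.h>
#include <memory>
#include <vector>

// Illustrative: take the widest reported contour as the goal and compute
// its offset from the center of a (hypothetical) 320-pixel-wide image.
double GetGoalPixelError() {
  auto table = NetworkTable::GetTable("GRIP/myContoursReport");
  std::vector<double> xs = table->GetNumberArray("centerX", llvm::ArrayRef<double>());
  std::vector<double> widths = table->GetNumberArray("width", llvm::ArrayRef<double>());
  double bestX = -1, bestWidth = -1;
  for (size_t i = 0; i < xs.size() && i < widths.size(); i++) {
    if (widths[i] > bestWidth) { bestWidth = widths[i]; bestX = xs[i]; }
  }
  if (bestX < 0) return 0;  // no contours seen; don't move the turret
  return bestX - 160.0;     // pixels off-center (320 / 2)
}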
Vision - evaluating
- For the most part, we are quite happy with it
  - Scored a good number of goals, did really well in auto
  - Controlled on 2 axes
  - GRIP was relatively simple, worked well without a coprocessor
- Downsides:
  - Large delay from camera to robot reaction (0.5-1 s) = slow shot
  - Vision was very susceptible to being thrown off
  - Vision was a separate program we needed to wait for at startup
  - Switching from practice bot to real bot required many hours of recalibration
Vision in 2017
- GRIP was a pilot project in 2016
- GRIP has been a major focus of 2017 updates
- Deploying as a Java app is now deprecated
  - Replaced by “generate code”, which generates C++, Java, or Python OpenCV code
  - This code will be directly callable by your robot program
  - Gets around the pain of installing Java and of ensuring the NetworkTables are being published before the game starts
  - Will be a lot more lightweight, most likely faster
- If we decide to use vision in 2017, we will most likely be using this
- https://wpilib.screenstepslive.com/s/4485/m/24194/l/674733-using-generated-code-in-a-robot-program
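If that route pans out, calling the generated code could look roughly like this. A hedged sketch only: it assumes GRIP generates a class along the lines of grip::GripPipeline with a Process() method and per-step getters, per the link above; the exact class and getter names depend on your pipeline:

#include <opencv2/opencv.hpp>
#include "GripPipeline.h"  // header produced by GRIP's “generate code”

// Illustrative in-process use: no separate Java app, no NetworkTables hop.
void ProcessFrame(cv::Mat& frame) {
  static grip::GripPipeline pipeline;  // assumed generated class name
  pipeline.Process(frame);             // run the whole pipeline on this frame
  auto* contours = pipeline.GetFilterContoursOutput();  // assumed getter name
  if (contours != nullptr && !contours->empty()) {
    cv::Rect box = cv::boundingRect(contours->front());
    // use box.x / box.width just like the old NetworkTables values
  }
}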
Questions