Despite their limited screen real estate, smartphones are commonly used for text editing and annotation. However, everyday situations such as texting on the go make it difficult to sustain the constant visual attention and hand engagement that editing text on a mobile device demands. In these scenarios, users sacrifice comfort and/or safety to maintain the hand-eye coordination that typing on their device requires. In this report, I propose an eyes-free solution to this problem of text editing and annotation on mobile devices, using speech simultaneously as the input and output modality. Because this is a novel mode of interacting with text, I conduct a first exploratory study of designing eyes-free interactions for mobile word processing. I establish the feasibility and desirability of the proposed approach and present EDITalk, a novel voice-based, eyes-free word processing interface. First, through an elicitation study, I investigate the viability of eyes-free word processing in the mobile context and identify the word processing operations users find relevant and desirable in such scenarios. Results show that in the eyes-free context users want meta-level operations such as highlight and comment, navigation operations such as repeating the current or previous sentence, and core operations such as insert, delete, and replace. However, users were challenged by the lack of visual feedback and by the cognitive load of remembering the text while editing it. I then study a commercial-grade dictation application (Dragon Dictation) and uncover serious limitations that preclude comfortable speak-to-edit interaction. I address these limitations through EDITalk's interaction design, which enables eyes-free meta-level, navigation, and core word processing operations in the mobile context.